From YouTube: Community Meeting, April 5, 2022
B
There we go. All right, hello and welcome to the kcp community meeting, April 5th, 2022. We have a couple of items on the agenda, each of which will probably be somewhat large, and please feel free to add your own. We have a lot of items on the 0.4 milestone and we've had some conversations about specifics of those. I don't know, Andy, do you think yours is short and contained? Mine, this one, is large. If we can go through your thing quickly and still have time...

A
Yeah, it's pretty short. So Maru, I'll steal your thunder a little bit, and I'm happy to defer to you if you want, but I just wanted to send out a little PSA that Maru has been working very hard on getting rid of, or at least setting us up to get rid of, the code that we currently have.
A
We most likely will add a kubectl kcp plugin sub-command to make it easy to add and update the syncer on a physical cluster. That is the direction we're heading. It simplifies a lot of the code that we have and a lot of the paths, and it also matches realistic expectations from a communications standpoint: in general, it's extremely unlikely that a kcp control plane would necessarily be able to access, 100% of the time, all of the API servers for all of the physical clusters.

A
It will map similarly to how the kubelet reports its status to the API server: if that heartbeating goes away or is missed for some configurable amount of time, then the status will be Unknown, and then we can decide if we want to auto-evict and move, or whatever needs to happen. But it's going to match that model.

Gotcha, nice.
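A minimal sketch of the heartbeat model described above, assuming a configurable grace period; the type and field names here are illustrative, not kcp's actual API.

```go
package main

import (
	"fmt"
	"time"
)

// SyncTargetStatus captures the idea described above: the syncer heartbeats
// into kcp, and kcp marks the target Unknown if the heartbeat goes stale.
// These names are hypothetical, not kcp's real types.
type SyncTargetStatus struct {
	LastHeartbeatTime time.Time
	Ready             bool
}

// condition returns "Ready", "NotReady", or "Unknown" based on the last
// heartbeat and a configurable grace period.
func condition(s SyncTargetStatus, grace time.Duration, now time.Time) string {
	if now.Sub(s.LastHeartbeatTime) > grace {
		// Heartbeat missed for too long: we no longer know the real state,
		// and a higher-level policy can decide whether to evict and move.
		return "Unknown"
	}
	if s.Ready {
		return "Ready"
	}
	return "NotReady"
}

func main() {
	now := time.Now()
	fresh := SyncTargetStatus{LastHeartbeatTime: now.Add(-10 * time.Second), Ready: true}
	stale := SyncTargetStatus{LastHeartbeatTime: now.Add(-5 * time.Minute), Ready: true}

	fmt.Println(condition(fresh, time.Minute, now)) // Ready
	fmt.Println(condition(stale, time.Minute, now)) // Unknown
}
```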
A
I think when the APIs that the syncer is using get to beta, then yes, we'll have a better support contract than we do right now. If you've been running a kcp instance for the past several weeks and upgrading it as we merge changes to main, then it's extremely likely that things will have broken on your cluster, or that you had to delete and recreate etcd or whatnot. So I don't think we're at a point where we're able to guarantee anything just yet, but yeah, in the future.

B
Yeah, I guess that's interesting, because before we committed to this change (I think we knew this change was coming eventually) there was a world where kcp managed your syncer inside of your cluster, and so, when you wanted an upgrade, kcp could do that upgrade for you. But now it's on you, the physical cluster admin, to keep that syncer up to date so that it can keep talking to the kcp layer.
E
Just potentially a forward-looking statement on syncer management: kcp won't manage the syncer directly, but we are anticipating splitting the syncer into a cluster-privileged, namespace-managing component and a strictly resource-syncing component. To me, that does suggest a possibility: the namespacing part is probably not going to change very much, and it's going to be privileged to the degree that maybe it could be responsible for upgrading the resource syncer.

B
And that super-powerful thing could upgrade its little brother, the syncer, if it wanted to, but that's potentially far future, and optional. Yeah, cool. I'm really excited for the split, so that the syncer does not have all the superpowers. The big sibling, little sibling thing will be very nice. Great. Otherwise, the syncer's work doesn't really change; it's more the installation and management, slash lack of management.
B
That's really cool. So with that, we can go through some of the 0.4 milestone stuff. We've had some conversations; I don't know if we have any specific ones we want to go over, but yesterday Stefan had a good long chat about the location concept, or the location constellation of concepts: location, pool, things like that. I don't think he's here, so I don't know if we want to go through that or if anybody else wants to.

B
It was a fairly small group of folks from this group who were interested in items for that. I believe it was recorded, right Andy, was it recorded? Okay, you should make sure that that recording goes out, and there was a doc associated as well, which I will also frantically go look for now.

F
Yes, please do post that. I was in a different conversation with Stefan yesterday that touched on related topics, and we're planning to follow up, so that's why I'm interested.

B
Okay, yeah, this was yesterday morning. There is a doc, there is a recording; I will make sure all of that goes out. And that covers all of the API negotiation and placement decisions, who gets to have input on placement decisions and who makes placement decisions, so there was a lot in there. I will make sure that that video gets shared. Is there any other very top-level 0.4 milestone stuff we should talk about?
C
I'd be interested if folks wanted to talk a little bit about the results of the discussions and kind of the context of what was put into the demo script. I think Stefan and Joe Keem are out, but David, you might have some insight on those location conversations.

G
The approach is that for now it sticks to a preliminary version of this, one that would not be virtual-workspace based, which is part of what we discussed yesterday, but which would contain most of the principles or most of the ideas of what will finally be there. So I think the demo script for both location sets and advanced scheduling is quite self-explaining, it seems to me.
G
Yes, obviously that's a topic that spans beyond 0.4, but the demo that is in the demo script for 0.4 is precisely limited to what would be available in 0.4, so especially only on the syncer side, with no transformation done on the syncer virtual workspace side.

B
Thank you, yes. So it's not that we're not planning to do it; it's that we're not planning to do it in the 0.4 timeframe, the milestone four.

B
Okay, using the virtual workspace that the syncers each talk to? Cool.

G
Exactly.
G
And as for the APIs related to Location, LocationSet, and Placement, that's the one just above; that's more on the scheduling part: having placement annotations, having the scheduling happen, and translating that into labels that are targeted at the syncer. In fact, one main point in this was the clearer separation between the responsibilities of the scheduler and the syncer: really separating the two, clearly defining the inputs and outputs of each, and the flow of information through those two components.
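A rough sketch of that separation: the scheduler writes its placement decision as labels, and the syncer only reads labels. The label key and helper names are made up for illustration and are not kcp's actual scheduling labels.

```go
package main

import "fmt"

// labelPrefix is a hypothetical label key prefix; kcp's real scheduling
// labels may differ.
const labelPrefix = "scheduling.example.dev/"

// applyPlacement is the scheduler's side: it records the chosen locations
// as labels on the object's metadata.
func applyPlacement(labels map[string]string, locations []string) {
	for _, loc := range locations {
		labels[labelPrefix+loc] = "Sync"
	}
}

// wantsLocation is the syncer's side: it only looks at labels, not at how
// the scheduling decision was made.
func wantsLocation(labels map[string]string, location string) bool {
	return labels[labelPrefix+location] == "Sync"
}

func main() {
	labels := map[string]string{}
	applyPlacement(labels, []string{"us-east-1"})
	fmt.Println(wantsLocation(labels, "us-east-1")) // true
	fmt.Println(wantsLocation(labels, "eu-west-1")) // false
}
```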
G
That's an interesting question, and that was discussed a bit yesterday at this famous meeting.

G
Obviously there would be, and we still need to discuss that further, but there would be different types of transformation. Typically, transforming the service accounts in a really physical-cluster-specific way could stay on the syncer agent side in the physical cluster, but a high-level transformation like deployment spreading, which needs knowledge of the status for every location, would obviously be on the virtual workspace side, which has the knowledge of all the location views for every syncer. So this is not completely fixed, and we might find other criteria to define where a given transformation occurs, but that's more for 0.4... 0.5, sorry, yeah.
B
Nice. Is there anything else we want to call out and talk about in the demo script: workspace types, policies, admission webhooks?

C
I guess Sean, Andy, Maru, Antonio, Steve, Farsha: is there anything that was descoped that we should talk about?

I
Yeah, Barcia and I are going to be looking at generating those client-go wrappers. We don't have, I think, a demo section on here yet, because we're not exactly sure what we'd want to demo, but we're going to figure that out once we look at the generators a little bit more. But nothing descoped.
B
Yeah, I'm not sure how we would demo that a controller works with this client, but great. I mean, is there anything else? This is the rest of the 0.4 milestone.

B
Now I am sharing the right thing, excellent. Is there anything else on here that people have questions or concerns about, or that lacks clarity?

C
All right, so we need to reconcile the work packages, this demo doc, and the milestone itself to make sure we've finalized there. So if you haven't created GitHub issues for what came out of your discussions, please try and do that in the next day or so.

B
Yeah, and if you have created one and it's not listed here, add it to this milestone so that we can track it. Great. Otherwise, it sounds like people are having conversations and working toward good stuff. I don't know if there's anything else we need to talk about today; if anyone has any other topics, please bring them up.
B
This one is probably a milestone-four one, right? This may even be a good first issue. Based on the chatter in Slack, it sounded like this was an overly strict validation we imposed when we thought we would have to have two pieces of information.

A
Yeah, and there's a PR for it. But Sam, I think, has some random community questions.
J
All right, hi folks, I'm Sam, I'm at Grafana Labs. We've been contemplating basically what amounts to embedding an API server inside of Grafana's backend for a while now, and we're doing various experiments around it. We started looking at kcp last year, noticing that our main reason for doing it is not that we want Grafana to become a scheduler, but that we like the object model, essentially. And we noticed the things that kcp had pulled out: you get rid of a whole bunch of resources that are really related to scheduling specifically, and instead you've got namespaces and RBAC and such.

J
That looks pretty close to what we would actually want out of the set of resources from an API server. So the actual question here is: is there any goal for kcp to become reasonably directly embeddable inside of another Go program? Is that a thing you're targeting? And, more broadly, are we just totally nuts? How crazy is this on a scale from one to ten?
B
I would say, you know, nine on the scale, but we're also at a nine, so it's entirely within the level of insanity that we expect. I think it is absolutely a goal to be able to embed a minimal API server into something else. We did that and then moved on to what we can build on top of it, so it's definitely possible.

B
I don't know if we have good docs for it. We have one good example of it, and it's everything we do, so I would believe that we could have better docs for "you want a minimal API server only, and not all of this multi-cluster nonsense; I just want a minimal API server." I think that could be good. Well, we've been waiting for a second example of one, so you get to be the guinea pig of the second example.
J
Well, good, yeah. One of the reasons for thinking about this in the first place: Grafana has a subset of resources which are really just inert objects, right? We don't really need reconciliation for them. But one of the weird things about Grafana is that there are some things which maybe ought to then also, ultimately, be able to be scheduled out, like as workloads that get pushed off to a proper Kubernetes cluster. So, you know, we have...

J
We still have a lot of experimentation that we're doing, but this has been, yeah. I have people who I think would love to actively be that kind of guinea pig.

J
The main thing for us right now is that we also have lots of folks who are terrified by the prospect of how complex it would be to basically embed an API server inside of the Grafana backend, and I have no ability to actually answer that question in a way that staves off, well, confirms or denies, any of their concerns. So yeah, anything that I can give them there.
A
Steve brings up a good question in chat, because we do use a fork of Kubernetes that enables us to do a lot of things that we want to do, and that stuff hasn't made it into upstream core Kubernetes yet. So some of that hopefully could go upstream and be accepted, but maybe not all of it.

J
Okay, that's enough, all right. So, well, actually, first: I understand there isn't good documentation on this in general, but is there, at the very least, a list of things that are in the fork? Is that a curated list somewhere?
A
It's mostly in a single package for the thing that represents the kube-like API server that we're importing, the one that has config maps and secrets and RBAC and whatnot. That's this generic control plane package that Steve referenced; we'll happily drop a link as well. There are some other changes that we've made, but most of those changes have to do with the multi-tenancy aspects that we've added, and if you don't need those, then you can get by without them. We actually care a lot about that.

A
The workspace concepts, yeah, a lot of that is heavily modified in the fork. One of the things that was mentioned briefly in the 0.4 milestone is needing, or wanting, to generate wrappers for existing client sets, and to make things like controller-runtime workspace-aware, and also to have the informers and listers be scopeable.

A
So there's a series of work items that we need and want to do to try and make it easier for controller authors to take advantage of multiple workspaces using stock client-go, hopefully, and eventually stock controller-runtime. But if you're trying to embed an API server and take advantage of the multi-workspace aspect, then there's more of the fork that you're going to need, like basically all of it.
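To make the workspace-aware client idea concrete, here is a small sketch that scopes requests to a logical cluster by prefixing the request path with /clusters/&lt;workspace&gt;, which is how kcp routes per-workspace requests; the wrapper itself is hypothetical and not part of any published kcp client library.

```go
package main

import (
	"fmt"
	"net/http"
)

// clusterRoundTripper prefixes every request path with /clusters/<name>,
// scoping the call to one logical cluster (workspace). This wrapper is a
// hypothetical sketch, not kcp's actual client code.
type clusterRoundTripper struct {
	cluster  string
	delegate http.RoundTripper
}

func (c clusterRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	scoped := req.Clone(req.Context())
	scoped.URL.Path = "/clusters/" + c.cluster + req.URL.Path
	return c.delegate.RoundTrip(scoped)
}

// fakeTransport records the path it saw instead of doing real network I/O.
type fakeTransport struct{ sawPath string }

func (f *fakeTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	f.sawPath = req.URL.Path
	return &http.Response{StatusCode: 200, Header: http.Header{}, Body: http.NoBody}, nil
}

func main() {
	ft := &fakeTransport{}
	client := &http.Client{Transport: clusterRoundTripper{cluster: "root:my-workspace", delegate: ft}}

	req, _ := http.NewRequest("GET", "https://kcp.example.invalid/api/v1/namespaces/default/configmaps", nil)
	resp, _ := client.Do(req)
	resp.Body.Close()

	// Prints /clusters/root:my-workspace/api/v1/namespaces/default/configmaps
	fmt.Println(ft.sawPath)
}
```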
J
Okay, yeah. One of our largest open questions at the moment has to do with... we're trying to figure out how to name and organize our objects, and whether we can organize them hierarchically or not, and literally yesterday I bumped across some of the documents that are floating around out there on your side about this hierarchical workspace naming concept. I have no opinion on it yet, apart from, at a very high level, this seems like it might be a similar thing.

J
I don't know if, in the universe where we actually do go this path with Grafana, we would then want to pick up your workspace concept or would need our own. No idea yet, multiple layers of uncertainty there, but in any case it's good to know that that's what you're doing!
J
What would be the most useful way, from your perspective, to try to make Grafana a guinea pig here? What would be the most useful way for us to interact with you?

A
I think an experiment where you try and embed kcp, the API server aspect, recognizing you'll need all the replace statements, and see how that works.

A
Could it go either way? We do have the ability to turn controllers on and off right now, but some things, like the workspace concept, are fairly fundamental, and so those controllers are, I think, pretty much always on. But if you run into problems where you don't want something and you have no way to turn it off, let's talk about it, and we'll see if it makes sense, and then we can do some PRs to refactor some things.
J
Yeah, so the spot we're at right now: we already have a working prototype, but the prototype basically just proxies. I mean, you just spin up a Kubernetes API server somewhere else and we'll just delegate all calls to it.

J
So we have this approach where we've built in the abstractions in an, at the moment, feature-branch but hopefully soon just feature-flagged version of Grafana, where we're able to interact with an underlying API server but expect that it's remote. And we've designed it in such a way that we would like to be able to swap, because we have operational models where, I mean...

J
Basically, in this hypothetical universe, simple Grafana OSS out of the box, that's the one where we don't want you to have to run a separate Kubernetes server in order to run Grafana; that would be bananas. But for our cloud offering, it probably makes a lot more sense to delegate to something external than to be running Kubernetes inside of every running Grafana process.
A
Yeah, and as Steve wrote in chat, if you wanted to do a prototype that was just an API server, but not workspaces, you could spin up the generic control plane like kcp does, and you would have a Kubernetes-like API server with everything but pods and whatnot in there. And if you need some pointers, just hop into Slack and let us know, and we can set up some separate screen-sharing sessions if you want and walk through some stuff.
J
Yeah, that would be great. I explained the context there mostly because the way that we've set this up internally is that we don't need to answer right away whether it will be feasible to embed an API server inside of Grafana in order to make progress. Naturally, though, as we do more experiments, the question looms larger of whether that will ever be feasible or not.

J
So I don't know how the folks who would do this would prioritize checking whether embeddability is feasible, or how costly it seems to be, versus the other things that they might want to do. But certainly I will... I don't think it will end up being me, I think it'll end up being someone else, but I will pass it along, and I will connect them with you all, and they can figure out how to, yeah.
B
I think there are people who have expressed idle interest. I wouldn't say that anyone has climbed the guinea pig wall yet, but the nature of being a guinea pig is that the more of them that do it, the easier it is for the folks behind you. So thank you for your service to the wider community.

B
Like, "oh yeah, I've wanted that before", but never really done the work to actually try it, so sure.

B
It's worth trying; it's worth seeing if it works, right? Yeah, well, great. Thank you for showing up and asking.

B
Thanks, all. All right, I guess we'll go back to the milestone issues. Where did I...
A
Topics: please feel free to interrupt. If you absolutely don't care about triaging issues and assigning them to milestones, feel free to drop as well.

D
I just had a follow-up question from the syncer conversation that I thought about as I was sitting here listening. Does that change cluster registration, then? Would you imagine that when you create the syncer on your physical cluster, that will register your cluster with kcp?
B
It will set up a syncer in there and do stuff. Instead, in this model, cluster registration is: install a syncer and point it at kcp, and the syncer will say "I'm here, give me stuff", or "I'm here, here are the APIs I know about, negotiate, and then give me stuff."
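A toy sketch of that handshake: the syncer advertises the APIs its cluster supports, and the two sides agree on the intersection before anything is synced. The function and names are illustrative, not kcp's actual negotiation API.

```go
package main

import (
	"fmt"
	"sort"
)

// negotiate returns the resources both sides agree on: what the workspace
// wants to place, intersected with what the physical cluster advertises.
func negotiate(requested, advertised []string) []string {
	have := make(map[string]bool, len(advertised))
	for _, r := range advertised {
		have[r] = true
	}
	var agreed []string
	for _, r := range requested {
		if have[r] {
			agreed = append(agreed, r)
		}
	}
	sort.Strings(agreed)
	return agreed
}

func main() {
	// The syncer announces itself and the APIs its cluster supports...
	advertised := []string{"deployments.apps", "services", "ingresses.networking.k8s.io"}
	// ...and kcp asks it to sync only what both sides understand.
	requested := []string{"deployments.apps", "services", "cronjobs.batch"}
	fmt.Println(negotiate(requested, advertised)) // [deployments.apps services]
}
```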
A
If you're just playing around with kcp, we're going to try and add a kubectl kcp plugin sub-command, and if you're in some sort of large fleet-management situation, you'll probably be able to take advantage of some library code to do that work as well. But I think it's going to be some split responsibilities.

B
All right, offers on the table. If you have any topics that are not issue triaging, please raise your hand and let me know; I'm gonna present.
B
Am I presenting? Yes, okay, great. Yeah, this was validation on the metadata name which was overly strict, because we were trying to cram two things into it. This one should be relatively straightforward; I may make this a good first issue. It's got a PR, so, oh well, even better: such a good first issue, it's already got a fix. Excellent, never mind. But it's still milestone four; gotta get those milestone-four points.

B
Yeah, I think for this one we were trying to do better developer docs and stuff in 0.4, so 0.4 it is.
L
Yeah, it's just related to push mode. The setup is: if you have a workload cluster registered and then the workload cluster disappears, then there's some cycle that happens, and I put that it's approximately 20 milliseconds per cycle. I don't think that's actually coded anywhere; that's just how long it takes for the loop to complete, just by eyeballing the logs.

L
So, however often that happens, you have that many cycles kind of spinning around, and it's just a question of whether we want to fix it or wait for push mode to go away.

A
I'm tempted to just close it and not put any time into it.
L
Yeah, so that happens when I just spin up kcp with an authenticating front proxy, the way that we would do for getting ready for shards.

L
When is the sharding stuff scheduled to start landing?

L
Best I can tell, it's not actually causing a functionality issue; it's just an error that pops up that needs to be tracked down.

B
I don't have a lot of context on this one. Andy, do you know what...?

A
No, let's... I'll ask on it for some clarification, sure.
E
Just a little bit of color on this: I'm one of the few people who actually runs the e2e tests repeatedly locally, and there's just so much spew. I'm showing you two lines here; this was like every... yeah.

B
All right, yeah. It's especially weird that there are two different versions of the string, and that one has the dollar-sign format thing that's supposed to get handled anyway. Weird. Another shard-related one: service account keys between shards.
A
0.5, I don't know. I mean, let's put it in 0.5, and something we should probably do when we start the milestone planning for 0.5 is look at everything in the milestone and reevaluate.

A
But yeah, the priority is low; it's ideally an easy fix. Yeah, is there a...

B
Well, there's only one priority label and it's "urgent", so I'm not going to do that, but yeah. I think there's a very easy fix, which is to update the Dockerfile to use distroless instead. But maybe that's a good first issue.
L
It's
actually
working
on
stage
we're
working
on
that
now.
The
minor
hiccup
is
getting
all
of
the
the
switches
into
the
image,
so
that
the
previous
issue
that
you
saw
with
the
kcb
client-side
plug-in
and
getting
those
warnings
to
go
away,
gotcha.
A
I think it's going to be a debugging requirement in the future. The rule of thumb that I have gone with is: if it's not user-facing in a status field, like a condition somewhere, and you expect, as a service operator or kcp owner, to be able to do some debugging here, then any log line needs to fully qualify all the information you need to know, which would include the workspace or logical cluster, namespace, resource name, that sort of thing. So that's what this is about.
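As an example of that rule of thumb, a log line written with klog's structured logging can carry the workspace (logical cluster), namespace, and name on every entry; the key names here are just one possible choice, not a kcp convention.

```go
package main

import "k8s.io/klog/v2"

func main() {
	defer klog.Flush()

	// Per the rule of thumb above: any log line a kcp operator might rely on
	// for debugging should fully qualify the object it refers to.
	klog.InfoS("synced resource downstream",
		"workspace", "root:org:team-a", // logical cluster
		"namespace", "default",
		"name", "my-deployment",
		"gvr", "apps/v1, Resource=deployments",
	)
}
```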
A
...are happening, I don't see that. I mean, look at GKE and things like that. Let me put it this way: if the managed *KS providers give you control-plane log access for your tenant, then we should probably try and find a way to do that. If they don't, then I'm not worried, or I'm not necessarily interested in trailblazing there. Okay.

A
I think somebody should probably take a look at it. I mean, it's related to development and the file being out of date.

B
This is sounding a lot like deployment splitter slash, you know, spread scheduling.
A
I think whoever is doing the location and placement work should definitely read through this and make sure that we're either covering it or, if not, that we have a conversation to figure out what the gaps are, whether there are ways to close them, or whether we want to offer different guidance.

B
Yeah, let me distill and rephrase that in a comment. I can do it, if you want. Excellent, then I will move on, or I'll come back to it.

A
I think we probably should go through the failures in GitHub Actions periodically and see if the flakes are still showing up or not. Yeah, this one's definitely fixed. Okay.
B
You'd love to see it consistently deploy a kcp server in kind clusters. Is this ever done, but also, is this just part of updating development.md to be good?

E
Yeah, so this is part of enabling CI testing with kind, basically. Part of this is done: we can create a test server compatible with e2e via a golang command. The idea would be that we'd extend that to actually creating kind clusters and maybe doing some initial setup.
E
Like deployment mutation and stuff, because right now we don't actually have e2e testing for that, and as part of the e2e testing I would expect that we would actually have a reproducible environment for developers as well. So yes, I would put this in 0.4, honestly, if we want to put it into the milestone, but it's not really a deliverable.

E
That would be the hope. I mean, like I said, the initial work I did was just to capture getting a kcp server that was compatible with e2e testing, trivially, without having to sling bash or command-line args. The hope would be that this will go a little bit further, because my focus is really on ensuring that we have the same environment locally as we have in CI, trivially, without extra work.
B
Right, this is embedding the version information in the server at build time, right? Yeah.

B
This one's now getting quite old, but: possible race when syncing deletes to downstream clusters. I think we have known that there are possible races in this; I remember having a conversation with, I think, Maru about the syncer being racy around the deletes.
E
Yeah, the hope would be, as we break the syncer into the resource and namespace parts... This is something that, you know, we have a lot of foundational work to do, and this will be part of it. It's a symptom of the fact that there's just a lot.

B
Okay, I'm going to toss it into 0.5, unless we feel strongly that it doesn't belong there.

B
The "be nice to people asking questions" bug: does anyone object to me putting this in 0.4?