From YouTube: Community Meeting, March 29, 2022
A: Hello, today is March 29th and this is the kcp community call. Up on screen we have the issue with the agenda for today, and if you are interested in adding agenda items, please feel free to add comments at the bottom of the issue. I just put the link in the meeting chat so you have an easy link to it. Why don't we go ahead and get started with the first one, which, Jason, you've got.
B: Yeah, I hope this doesn't turn into a long, drawn-out discussion, but PR 774 was adding the demo script for cluster cordon and evict, and Maru pointed out that it's pretty similar (I'd realized it too) to the existing end-to-end tests we have that prove you can create a cluster, stuff schedules to it, you cordon it, and no new stuff schedules to it.

B: You evict, and everything leaves. The mechanics of that are pretty simple, but now we have it both in Go end-to-end tests and in a demo-magic bash script. He's here, I guess, but I will poorly summarize: why do we have two of these? And Paul's answer was, I think we should discuss this; it might actually tie into the next item too, about prototype versus semver.
B
But
I
I
don't
you
know
I've
already
written
it,
so
I
don't
personally
mind
having
two
copies
of
it
so
long
as
only
one
of
them
is
canonical
for
breaking
ci
and
that's
the
end-to-end
test
like
the
go
test.
B
I'm
also
fine
not
having
bash
magic,
the
bash
magic
version
in
the
repo-
if
we
think
that's
not
providing
value
or
only
you
know
I'll,
create
it
and
then
I'll
record
it,
and
then
I
will
delete
it
or
it'll
live
in
a
directory
in
the
repo
that
slowly
rots
over
time.
Until
you
know
whatever
I
don't
know,
I
mostly
brought
it
up
because
it
seemed
like
a
useful
discussion
for
future
prototypes
and
demos
was
curious.
What
other
folks
thought.
A: Yeah, we had talked about this a few meetings ago: whether it was valuable to continue maintaining the demos and making sure they function as the code in the repo evolves, whether we wanted to treat them as point-in-time and not maintain them, or whether we wanted to have no demos in the repo at all and instead have documentation that explains the functionality, with the expectation that we would keep the docs up to date.
B: Scripts that are not exercised in any way after they're committed will break. They might have already broken; this script might have already broken since, you know, Friday.
B: Yeah, and in future, because the demo script is part of each work item (each prototype ends with "show a demo that it works"), we could just rephrase those from "commit a demo-magic bash script" to "record a video" or "record an asciinema" or something, yeah.
D: Two other ideas: a second repository under kcp-dev where it's clear what the purpose and the guarantees are, like contrib or something like that. And I personally would prefer a markdown, blog-post-like presentation of a feature, maybe with an asciinema as well, but more in that style than just ten minutes of video showing it. Who will follow that, actually?
F: We have prototype-specific branches too, don't we, that aren't going to be updated as we move to the next one? So we could also commit the prototype-specific demos to those branches only and not have them in main.
A: So I think an action item here might be to summarize the different options, have some async discussion if there's any more that people want to have, and then decide a path to go forward with.

A: Sounds good. Jason, would you be willing to... yeah.
A: I wonder if it's maybe a little confusing to outside observers coming into kcp to see that we have prototype versions or prototype numbers instead of 0.1, 0.2, 0.3 and so on. I will second what Stefan just wrote in chat: it would be my preference to start using semantic versions, all pre-1.0, so everything is subject to breakage, but I think that will be less confusing to the outward community and outside observers. So my suggestion, which is certainly up for debate, is to move to semver going forward.
A: And I see Gorkham wrote, "for me, prototypes also represented a date." We're not necessarily removing dates from the milestones in GitHub; it's just changing the name from prototype N to 0.3 or whatever, yeah.
H: I was actually just going to say that: for me, separating the prototype, which is attached to a date, from the version number makes a lot of sense. That's what I was going to say.
A: Okay, I'm not seeing any objections, so I will happily take on the action item of renaming the milestones, and I'll send an email to the dev list as well.

A: Okay, Jason, you have the next topic.
B: Yeah, I think I mentioned it before, but the CFP for KubeCon North America in Detroit is open. I think we're doing a ton of really interesting, novel, exciting work, and if you don't come to these meetings you probably don't know about it. So I would like to lobby at least a few of us to think about what talks we could propose. I'm more than happy to help you write the abstracts, help you write the proposals, and more than happy to talk about co-presenting.

B: If it's something I think I could possibly co-present, that is. But I think we're doing a lot of cool stuff, and this would be a really good opportunity to go into more detail with folks about what we're doing and, you know, brag on our progress a little better.
A: Very cool. So folks, if you are interested in working on those topics or have other topics, please let Jason and the rest of us know.
J: Can you see my screen correctly? Yes. Mainly just as a reminder, it consists in this: I have a sub-path, an endpoint, dedicated to each syncer. So here is the syncer that lives in the us-west kind cluster that has been registered in the default demo kcp workspace. I have one endpoint for each syncer, and the syncer would point to it; it would be that syncer's view of the kcp world, let's say.
J
So
just
for
the
reminder,
so
the
the
the
addition
based
on
on
last
week,
work
is
exploration
to
implement
the
scene.
You
know
the
sinking
strategy,
typically
the
in
my
case,
the
deployment
splitter
having
a
distinct
view,
a
slightly
changed
view
of
the
same
deployment
for
each
sinker.
J: You know, six replicas on one side, seven replicas on the other side, and then being able to consolidate the statuses back to the externally visible kcp deployment.
J
How
we
did
that
previously,
we
mainly
just
create
two
deployments
leaf
deployments
explicitly
in
the
kcp
world,
and
instead
of
that,
we
can
just
on
the
fly
directly
in
each
of
these
virtual
workspaces
change.
The
view
that
each
sinker
has
to
change
the
number
of
expected
replicas
and
then
also
on
the
other
way
around.
When
a
sinker
would
update
its
replicas
on
the
fly,
we
would
be
able
to
update
the
the
main
kcp
replicas
by
just
some
summarizing
summing
in
facts,
the
replicas
of
all
the
locations.
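A minimal sketch of the upward consolidation J describes, with illustrative names rather than kcp's actual code: the main object's status is rebuilt by summing the per-location statuses.

```go
package syncersketch

import appsv1 "k8s.io/api/apps/v1"

// consolidateStatus sums the replica counts reported by each location into
// the status of the externally visible kcp deployment.
func consolidateStatus(main *appsv1.Deployment, perLocation map[string]appsv1.DeploymentStatus) {
	var total, available int32
	for _, st := range perLocation {
		total += st.Replicas
		available += st.AvailableReplicas
	}
	main.Status.Replicas = total
	main.Status.AvailableReplicas = available
}
```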
J: Yes. So, as you can see, I point here to kcp with the kubeconfig, but I override the server to point to the syncer virtual workspace for the location of the us-west-1 cluster, and the same for the other syncer. So each syncer is really pointing to a cluster which is its dedicated virtual workspace.
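For illustration, the same server override can be done programmatically with client-go; the virtual-workspace URL below is hypothetical, standing in for the per-syncer endpoint shown in the demo.

```go
package syncersketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientForSyncer reuses the kcp kubeconfig's credentials but talks to the
// endpoint dedicated to one syncer, so it only sees that syncer's view of
// the kcp workspace.
func clientForSyncer(kubeconfigPath, virtualWorkspaceURL string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	// e.g. "https://<kcp>/services/syncer/us-west-1" -- hypothetical path
	cfg.Host = virtualWorkspaceURL
	return kubernetes.NewForConfig(cfg)
}
```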
J: Let me start it here; the other one is already started, for the east-1 location. Now, if I create the deployment: here is the deployment I will create, the one used in other demos. Obviously I changed the labels, because currently the cluster label is mainly oriented to the one-to-one syncing use case: you can have only one cluster label that points to a given location, and here what I want to do is split between two locations.
J
So
that's
why
I
just
rewrote
the
labels
a
bit
differently:
cluster
dot
and
the
name
of
the
location
here
so
and
the
virtual
thinker,
the
virtual,
the
sync
virtual
workspace,
sorry,
will
be
able
to
directly
transfer
one.
This
deployment
to
east1
location
is
transinker,
showing
exactly
what
east
one
should
see
and
would
forward
this
to
the
west
one
sinker
just
also
presenting
what
west
one
should
see.
J
So
if
I
apply
the
deployment
now
and
now
I
get
deployments,
I
can
see
that
I
have
exactly
the
same
mechanic,
the
same
behavior
as
the
deployment
splitter,
but
without
a
deployment
splitter.
In
fact,
I
mean,
without
addition,
additional
deployments.
Everything
is
stored
on
the
single
deployment,
which
is
the
kcp
one.
So
obviously
the
question
is:
where
is
the?
Where
are
the
location,
specific
statuses
replicas
stored?
J
And
if
we
look
in
more
details
here,
we
can
see
that
they
are
stored
in
that's
something
we
had
discussed
in
the
past
they're
stored
in
annotations,
which
are
typically
internal
annotations,
that
only
the
thinker
is
interested
in,
and
only
the
virtual
thinker,
of
course-
and
these
are
just
the
diffs
between
the
main
kcp
object
and
each
location
diff
for
this,
both
the
spec
and
the
statues.
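A hedged sketch of that storage scheme; the annotation key format and the diff encoding are assumptions for illustration, since the transcript only says that per-location spec and status diffs live in internal annotations on the single kcp object.

```go
package syncersketch

import jsonpatch "github.com/evanphx/json-patch"

// Hypothetical key prefix; one internal annotation per location.
const specDiffAnnotationPrefix = "internal.syncer.kcp.dev/spec-diff-"

// locationSpecView rebuilds a location-specific spec by applying the stored
// diff (assumed here to be an RFC 6902 JSON patch) to the main object's spec.
func locationSpecView(mainSpecJSON []byte, annotations map[string]string, location string) ([]byte, error) {
	diff, ok := annotations[specDiffAnnotationPrefix+location]
	if !ok {
		return mainSpecJSON, nil // no diff: this location sees the main spec as-is
	}
	patch, err := jsonpatch.DecodePatch([]byte(diff))
	if err != nil {
		return nil, err
	}
	return patch.Apply(mainSpecJSON)
}
```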
J
So
that
means
that
really
only
the
main
deployment
contains
the
whole
information,
its
status,
own
status,
the
externally
visible
and
spec,
and
then
the
divs
that
allows
you
reconstructing
the
corresponding
view
for
each
location
based
on
the
main
kcp
object
and
on
the
other
hand,
if
I
try
to
have
a
look
to
what
is
the
view
that
the
west
one
sinker
has
on
on
these
deployments
here,
you
can
see
that
the
west
only
sees
seven
available.
Replicas.
J
Well,
not
well,
that's
the
statues,
but
if
you
look
in
the
spec,
the
west
only
gets
seven
replicas,
but
also
on
the
fly
during
you
know,
transformations
that
are
on
the
fly
applied
by
the
synchro
virtual
workspace.
J
It
gets
the
div,
on
the
other
around
the
way
to
rebuild
the
kcp.
The
main
kcp
object
from
this
location,
so
location,
can
know
what
the
main
kcp
object
is
and
the
other
around
and
managing
that
round
trip
correctly
I
mean
in
round
trip,
allows
incre
iteratively,
adding
both
I
mean
consolidating
the
status
of
the
main
kcp
object,
based
on
the
the
the
status
of
each
location,
because,
in
fact
always
the
main
kcp
object
has
all
the
information
regarding
all
the
locations
that
are
associated
to
it
for
each
sinker.
J
So
just
follow
for
the
funny
here.
If
I
change
not
not
here
sorry,
but
here,
if
I
just
do
a
patch
and
try
to
scale
up
to
51.
J
And
then
you
know
get
the
deployments,
then
we
can
see
that
it's
slowly
increasing
and
if
also
I
try
to
directly
look
into
the
kind
cluster
the
west
one
here.
I
can
see
that
now
it
was
increased,
so
I
mean
everything
here
I
mean
the
the
external
behavior
is
exactly
the
same
as
the
existing
deployment
splitter,
but
you
just
don't
have
additional
deployments
one
for
each
location.
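The scale-up here is an ordinary patch against the kcp deployment; for illustration, an equivalent client-go call might look like the following (names are illustrative), with the re-split across locations left entirely to the virtual workspace.

```go
package syncersketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment patches spec.replicas on the main kcp deployment; the
// syncer virtual workspaces then recompute each location's share on the fly.
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	_, err := cs.AppsV1().Deployments(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```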
J: Another point which is quite important is that in such a model the syncer becomes even more systematic and blind. The syncer doesn't even have to use a label to get everything that is labeled for it, because this filtering would also be done in the virtual workspace, for each endpoint, for each syncer in fact. And here too, everything is done through this round-trip management of the annotations.
J
It's
just
based
on
transformations
that
are
done
on
the
virtual
workspace
side,
which
I
showed
last
week
that
are
really
synchronous
when
just
done
just
before,
transferring
the
request
to
the
kcp
chart,
in
fact
so
yeah,
that's
mainly
so
how
it
works
for
the
the
deployments
feature
and
I'm
currently
working
also
on
testing
that,
on
the
on
the
other
main
use
case,
which
is
the
ingress
splitter,
because
clearly
here,
the
the
syncing
in
fact
is
really
just
implementing
in
in
the
code
in
the
go
code
of
the
virtual
thinker,
implementing
a
sync
strategy,
which
means
that
just
exactly
the
logic
that
you
have
in
the
deployment
splitter,
you
have
a
location.
J
You
have
a
new
kcp
resource
that
has
been
changed
on
the
kcp
side,
you
want
to
produce
the
corresponding
location
view
the
corresponding
view
for
a
given
syncer.
Based
on
this.
Typically,
you
would,
you
know,
divide
the
number
of
expected
replicas
by
the
number
of
locations
and
and
for
the
update
from
location.
It's
the
contrary.
You
get
the
new
location,
specific
view
of
the
object
with
the
status
changed
updated
and
then
it
would
due
to
the
divs.
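A sketch of that downstream split, under the stated assumption that the strategy divides the desired replicas evenly across locations and spreads the remainder; the real strategy may differ.

```go
package syncersketch

// splitReplicas returns per-location replica counts for a desired total,
// e.g. 13 replicas over two locations -> 7 and 6.
func splitReplicas(total int32, locations []string) map[string]int32 {
	out := make(map[string]int32, len(locations))
	if len(locations) == 0 {
		return out
	}
	n := int32(len(locations))
	base, rem := total/n, total%n
	for i, loc := range locations {
		out[loc] = base
		if int32(i) < rem {
			out[loc]++ // spread the remainder over the first locations
		}
	}
	return out
}
```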
J
It
would
be
able
to
reconstruct
the
main
kcp
object
and
based
on
this,
to
get
the
other
locations
and
based
on
the
status
of
each
location,
be
able
to
reconstruct
the
status
of
the
main
object.
So
that's
the
whole
point
is
really
that
all
the
information
is
contained
into
these
annotations,
which
also
makes
it
possible
for
more
complex
cases
like
in
the
ingress
you
know,
and
for
ingress
controller,
that
it
could
be
done
out
process
in
the
case
of
the
deployment
splitter
is
just
in
process.
J: I mean, this implementation would mainly just do nothing, but in any case you would have all those diff annotations up to date, which means that any controller can look at the main kcp object, use a helper that would be in the code to reconstruct each location-specific object, and then, based on this, apply whatever logic it would like to apply.
J
So
that's
how
obviously
we
we
would
do
for
the
ingress
splitter,
and
so
in
fact
the
ingress
splitter
would
just
react
each
time
an
ingress
is
changed
and
the
annotations
are
changed.
Let's
rebuild
all
the
the
view
for
each
location
and
then,
of
course,
get
the
updated
statues
for
each
location
and
based
on
those
status,
update
the
envoy
controller
yeah.
So
that's
mainly
how
I
mean
a
proposal
that
I'm
about
to
to
try
to
formalize
and
write
and
that
we
can.
You
know,
start
discussion
based
on
this
yeah
go
ahead.
Stephan.
D
J: Yes, exactly. Typically, if we have a look at the strategy implementation that I plugged into the deployments here, it's in fact a strategy for all scalable objects, because it's exactly the same logic as what exists in the deployment splitter, but based on unstructured objects. So it applies as soon as I have spec.replicas and status.replicas.
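Illustrative only: the "any scalable object" idea amounts to reading and writing spec.replicas and status.replicas generically on unstructured objects, roughly like this.

```go
package syncersketch

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// desiredReplicas reads spec.replicas from any scalable object.
func desiredReplicas(obj *unstructured.Unstructured) (int64, bool, error) {
	return unstructured.NestedInt64(obj.Object, "spec", "replicas")
}

// setDesiredReplicas writes spec.replicas back, e.g. with a location's share.
func setDesiredReplicas(obj *unstructured.Unstructured, replicas int64) error {
	return unstructured.SetNestedField(obj.Object, replicas, "spec", "replicas")
}
```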
D: Yeah, I'm not sure; we can talk offline about that. I just want to say: if we use annotations, they grow bigger, right? This use of them is clever, maybe it's even a good solution, but maybe in the future you want to have something that users can understand. This is nothing for users at the moment, right? It's a diff for controllers, or for the virtual workspace.
J
Yeah,
I
mean
that's
mainly.
There
are
several
aspects
there.
It's
the
annotations
here
are
mainly
the
storage
part
of
it
somewhere.
You
have
to
to
to
to
know
where
you're
going
to
store
the
the
divs
between
the
main
object
and
each
locations.
Here
I
stored
that
in
annotation.
So
obviously
I
had
to
you
know
enable
updating
annotations
at
the
same
time
as
the
status
in
the
crd
handler
just
for
internal
annotations,
but
we
could
obviously
choose
to
store
that
somewhere
else
or
in
some
other.
D
You
know
anything
else,
yeah
we
we
can
do
some
random
brainstorming
session
about
this.
I
think
I
have
some
ideas
how
we
could
use
that,
maybe
hide
it
in
the
future.
So
we
should
talk
about
that
if
we
start
like
that,
it's
fine,
but
we
need
a
way
forward
when
you
want
to
get
rid
of
this
arguments
eventually,.
J
Yes,
surely
I
mean
that's
really
implementation
detail,
but
just
to
explain
just
a
way
to
to
to
showcase
the
fact
that
we
can
finally
have
a
way
to
store
all
these
divs
and
information
really
attached
to
a
single
object.
D
To
answer
steve's
question:
there's
a
big
yes,
we
can.
I
mean
this
is
just
a
url
right.
Yes,
pointing
your
cube
cuddle
to
this
url
with
the
right
permissions.
A
user
could
see
what
the
synchro
sees
so.
J
Yes,
that's
this
one!
If
I
sorry
not
this
one,
yes,
this
one.
If
I
do
quick
ctl
and
I
override
the
server
here
to
point
to
the
virtual,
the
the
virtual
workspace
endpoint
dedicated
to
this
syncer,
then
I
can
see
exactly
what
what
this
the
corresponding
thinker
would
see.
J
J: Yes, completely. Once again, it's just a question of the URL you point at, where you expect the cluster, the kube API server, to answer.
D: Maybe just one thing which I found surprising, but in a good way: all of this stuff is really real-time, right? It's not that there's an informer running in the virtual API server right now.
J: Yeah, it's completely real-time, I mean on the fly. And the other thing is that, sorry, it's stateless, right? That's the point: it's stateless. Yes, completely. And there's also this round-trip diff mechanism: on the fly, when a location retrieves its view, it gets the diff added the other way around, I mean the diff between the location's view and the main kcp object.
J
You
can
just
reconstruct
that
and
you
get
in
fact
all
the
locations,
all
the
statuses
of
all
the
locations
on
the
fly.
So
once
again,
it's
there
is
no
additional
request
to
that
is
required
to
the
kcp
chart
to
be
able
to
reconstruct
the
status.
K: Yeah, thank you. It is awesome to have a solution to get rid of the leaf resources that get created. I didn't get the path from an external controller's standpoint, though: how would you see that logic being delivered, basically?
J
Yeah
so
I
mean,
I
think
it's
mainly:
it
depends
on
on
the
level
your
controller
would
work
on
if
you're
and
if,
if
you
have
a
controller
that
is
only
interested
by
you,
know
the
the
external
kcp
view,
the
main
kcp
view,
then
there
is
nothing
changed
if
you
have
a
controller
that
is
interested
at
leveraging
the
status
of
location,
specific
views.
J: Typically that's the ingress controller: currently it gets the IPs of every leaf ingress and then updates an Envoy configuration based on that. In such a case, one of the ideas would be to make available the logic that allows reconstructing location-specific resources based on the diffs; mainly, the same function that is used to reconstruct the locations in the virtual workspace would be available as an API, or as a helper, for external controllers such as the ingress controller.
K
So
I
I'm
I'm
thinking
basically
like
the
ivory
code
gateway
where
it
won't.
I
mean
that
would
be
an
external
controller
running
connected
to
a
kcp
service.
Basically,
so
the
I
mean
you
would
have
that
controller
want
to
want
to
perform
a
logic
on
the
on
the
spec
of
the
resources
that
get
sync
to
the
physical
cluster.
L
Right,
but
what
resources?
Because,
if
you're,
if
the
hybrid
cloud
service,
which
is
trying
to
fit
like
it's,
what
is
the
use
case
for
the
the
object?
What
object
you're
talking
about?
Because
if
you're
talking
about
deployment
you're
saying
oh
I'd,
run
a
different
deployment
splitter.
But
that
would
mean
that
you're,
conflicting
with
the
sinker
yeah
yeah.
C
K: But I mean, just as an example, taking it out of kcp and running it externally. Obviously we wouldn't have two, one running in kcp by default, I guess, and another one; but just so that we understand how that mechanism could be applied to other resources.
J
Yeah,
by
the
way
in
in
the
kcp
virtual
I
mean
the
thinker
virtual
workspace.
It
would
not
be
necessarily
hard
coded
because
it's
mainly
just
you
know,
pluggable
transformations
that
you
can
add
gbr,
so
I
mean
it.
It
would
be
something
that
could
even
be
be
configured.
But
what,
if
I
understand
clearly
your
question,
the
question
is
that
for
now
it's
completely
synchronous
the
transformation
from
the
kcp
object
to
the
location,
specific
object,
and
what,
if
you
want
to
plug
some
logic,
that
would
be
you
know,
provided
by
a
controller.
A
J: For the downstream-to-upstream case, the status consolidation, it's a bit easier, because it's the end of the flow. Even if you did nothing special, as long as you updated all the diffs for each location, you can react after the fact in the controller, in the ingress controller. For upstream to downstream it's a bit trickier, because it's really a synchronous process.
J
The
transformation
is
done
during
lists
and
watches,
for
example,
and
so,
if
obviously
we
could
delegate
to
some
you
know
webhook
like
or
even
to
some
controller.
You
know
a
bit
on
the
model
of
of
subject,
test,
review
or
stuff
like
that.
But
then
the
problem
is
that,
on
the
on
the
asynchronous
part,
your
controller,
you
would
need
to
really
be
very
fast
because
you
are
waited
for
by
by
a
process
that
is
inherently
synchronous.
So
that's
the
main,
it's
completely
possible
to
delegate
that
to
some
other
external
component.
K
J: Yeah, exactly, yes. That would also be an option, a bit like what I explained with the status: in the same way, the ingress controller would be able to have an API or a helper to decode the diffs.
J
Possibly,
we
could
provide
some
sort
of
api
to
be
able
to
produce
the
divs
yourself
outside
of
the
overall
process
and
then
in
the
in
the
main
thinker
loop,
if
the,
if
those,
if
they
are
already
there-
and
maybe
there
is
a
flag
or
something
like
that
or.
A: I need to break in here; we've got about 20 minutes left and we want to talk about scoping for 0.4, so I would encourage you all to continue this discussion offline, either separately or in a Google doc or Slack or somewhere, if that's cool. All right, Jason, if you wouldn't mind resuming screen sharing: the last item is the P4 items in the work packages doc.
A: Okay, so our target date is a month from today, April 29th.
A
We
have
been
hoping
to
have
about
a
six
weeks
or
so
a
month
and
a
half
if
we
had
finished
prototype
three
on
the
18th,
but
it
slipped
a
little
bit.
So
looking
at
the
schedule,
actually,
let
me
start
with
themes,
transparent
multi-cluster
workloads
and
repository
repository
hygiene
and
approachability,
so
schedule
wise.
We
want
to
have
designs
and
rough
sketches
for
demos
on
april
4th,
which
I
don't
have
my
calendar
in
front
of
me
is
monday.
A
So
stefan
can
I
turn
it
over
to
you,
or
would
you
be
willing
to
take
over
and
talk
about
the
focus
here?
So
we
have
lots
of.
D
D
There
are
topics,
that's
also
commented.
There
are
topics
around
yeah.
What's
the
questions
of
your
achievement,
so
this
multi-cluster
workload
thing
multiple
clusters
for
one
namespace,
which
probably
depend
on
david's
work.
So
david
will
continue
this
work.
We
will
see
when
this
lands,
but
if
we
decide
this
is
this
is
the
future.
You
should
be
very
careful
not
to
build
new,
apis
or
newsletters,
or
something
like
that,
which
then
have
to
move
over
to
the
virtual
workspace
yeah.
We
have
to
just
to
see
which
of
those
we
want.
D
A
D
D
D
D
D
C
D
B: For the pod logs item, are we comfortable with 0.4 including just a design that we all agree on for this? I feel like the implementation of this will be... yeah.
A: Yeah, that makes sense to me. I guess we need to check in with Antonio and see if he has availability.
A
So
we
didn't
talk
about
the
first
one
diagnosability
when
why
sinking
fails,
but
before
we
get
into
that,
like
I,
I'm
trying
to
figure
out
the
approach
here,
given
that
there's
so
many
things
in
here
and
a
finite
number
of
people
and
like
stephanie,
were
talking
about
putting
names
next
to
items
like,
should
we
should
each
of
us
put
our
our
name
next
to
one
thing
and
plan
for
getting
that
one
thing
in
or
what
were
we
thinking.
D
A
C
A: Okay, I want to put my name in here as well, at least to help guide the direction.
B: Are there specific things for repo hygiene? This is not intended to say that I think we don't need repo hygiene, but are there issues tagged with hygiene, or specific stuff that we want to get done for that?
A
One
of
them
that
comes
to
mind
for
me
is
creating
a
base
controller
that
all
the
controllers
can
extend.
There's
an
issue
open
for
it.
You
know
we're
we're
not
using
controller
runtime.
It's
it's
somewhat
impossible
right
now,
given
the
state
of
things.
So
without
that,
then
the
next
best
thing
is
to
create
a
base
controller.
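A hedged sketch of what such a base controller could look like, given that controller-runtime is off the table: a reusable queue-plus-worker skeleton that concrete controllers embed, supplying only a reconcile function. This is illustrative, not the design from the open issue.

```go
package basecontroller

import (
	"context"
	"time"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

// Controller wraps the boilerplate shared by all controllers: a rate-limited
// workqueue and a worker loop that calls a per-controller reconcile function.
type Controller struct {
	name      string
	queue     workqueue.RateLimitingInterface
	reconcile func(ctx context.Context, key string) error
}

func New(name string, reconcile func(ctx context.Context, key string) error) *Controller {
	return &Controller{
		name:      name,
		queue:     workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), name),
		reconcile: reconcile,
	}
}

// Enqueue is called from informer event handlers with a cache key.
func (c *Controller) Enqueue(key string) { c.queue.Add(key) }

// Start runs workers until the context is cancelled.
func (c *Controller) Start(ctx context.Context, workers int) {
	defer utilruntime.HandleCrash()
	defer c.queue.ShutDown()
	for i := 0; i < workers; i++ {
		go wait.UntilWithContext(ctx, c.runWorker, time.Second)
	}
	<-ctx.Done()
}

func (c *Controller) runWorker(ctx context.Context) {
	for c.processNext(ctx) {
	}
}

func (c *Controller) processNext(ctx context.Context) bool {
	key, quit := c.queue.Get()
	if quit {
		return false
	}
	defer c.queue.Done(key)
	if err := c.reconcile(ctx, key.(string)); err != nil {
		utilruntime.HandleError(err)
		c.queue.AddRateLimited(key) // retry with backoff
		return true
	}
	c.queue.Forget(key)
	return true
}
```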
J: Yeah, maybe it's also related to the fact that we would remove demos. If we remove the demo scripts, having end-to-end tests with, you know, a complete real mode is quite important, I assume.
B: Yeah, this would definitely not be something we would recommend in general, but some of the downstream users want to test with it, this is my understanding, so I could port some, yeah, I could.
B: Yeah, it's not something we want long-term; it's only to enable short-term experimentation. So yeah, it's experimental.kcp.dev!
A: Yeah, agreed, but those two things are important for some downstream stuff, so we will want to get those done in P4.
A: Stefan, are you cool moving that?
J: Yeah, and by the way, related to virtual syncers: in the various workload-related and syncer-related items, I think we should have discussions about what should be put back into the virtual syncer transformations and what should be transformed on the syncer client side, on the physical cluster. Maybe everything should be moved back to the virtual syncer, but that's not settled, I'm not sure. I think we have to discuss that.
J: It seems to me that we have to draw the lines more clearly between scheduling, syncing (quite systematic syncing, but with strategies), and then the syncer client side, which is completely blind.
G: Not so much on the hygiene side, and there's a tension between getting features done and getting nice-to-have development stuff done. But ideally we would move all the controllers into a single controller manager and enable debugging controllers individually in tests. I'm not sure of the timeline on that, but that's definitely the next big task.
D: Please start those design meetings, one-hour or half-an-hour meetings, whatever; invite the people in Slack so it's public if possible, and try to scope things down for the time we have.
A: I think we're probably at a pretty good stopping point now. Just to reiterate: if you are putting your name next to something in this list, please start working on designs and scheduling meetings to discuss them. The goal is to come back on Monday or Tuesday next week and have them roughly finalized, so that we have our final scope for 0.4 locked in.