From YouTube: Community Meeting, July 12, 2022
A
Okay, so welcome everybody. This is the kcp community call, July 12, 2022, and, as usual, the issue is visible now. So if you have topics you want to talk about today, please add some. At the moment we have the usual hygiene for the issues.
A
And then we have three more topics. The first one is maybe short: it's basically a shout-out. Sergius started this document just to understand testing, that is, which users are used to connect to which components, especially in the end-to-end tests, and this is what came out.
A
I just want to point out a few things which might be helpful when you start debugging this pretty special end-to-end CI job. It's special because we have the front proxy, so there is one component more, and the test is running here at the top. I think you cannot see my cursor, right? If I point... yeah, we can see it here.
A
All the tests basically use this arrow: they connect to the front proxy, and only then does it connect to the shard, to the kcp server. Of course there are more connections; some of the tests connect directly, for reasons, and I think one arrow is missing: some tests also connect to the syncer virtual workspace here. But basically keep in mind, this is the main connection.
A
Yeah, but for the test cases it just doesn't make any difference; they connect the same way. All right, so this is mostly what I wanted to say. Maybe just to highlight those colored texts here: they are super important, and they are the reason why the connections are the way they are. Wildcards are completely disabled at the front proxy.
A
If a workload wants to connect via wildcards to API exports, for example, it has to go through the green path here, to the syncer virtual workspace or the API export virtual workspace.
A
In the same way, if a test case wants to do wildcard requests against kcp, but not going through the virtual workspace (like when you test the virtual workspace itself), then you have to go directly to the shard. So this is also a reason why this arrow exists.
A
So keep those in mind. The arrow here, and also the vertical one, are privileged: you have to be system:admin or system:masters to even be able to use this feature, and yeah, that is different from the secured, authorized green wildcards on the left side for the virtual workspaces. Those are the reasons why, when you notice wildcards don't work, you need a different client.
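A minimal sketch of the two access paths just described, assuming hypothetical frontProxyURL, shardBaseURL, and token values (kcp's own e2e framework has its own helpers for this):

```go
// Sketch only: ordinary test clients go through the front proxy, while
// cross-workspace (wildcard) requests must bypass it and target the shard
// directly (or use a virtual workspace) with highly privileged credentials.
package e2e

import "k8s.io/client-go/rest"

// scopedConfig builds a config for normal, single-workspace access via the front proxy.
func scopedConfig(frontProxyURL, workspacePath, token string) *rest.Config {
	return &rest.Config{
		Host:        frontProxyURL + "/clusters/" + workspacePath,
		BearerToken: token,
	}
}

// wildcardConfig builds a config for /clusters/* requests; the front proxy rejects
// these, so it points at the shard and assumes system:masters-level credentials.
func wildcardConfig(shardBaseURL, adminToken string) *rest.Config {
	return &rest.Config{
		Host:        shardBaseURL + "/clusters/*",
		BearerToken: adminToken,
	}
}
```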
D
Right, so yeah, one thing: I'm working on the system:masters-less kcp front proxy PR, so I'm constantly doing one step forward and constantly chasing main with changes as well.
D
So after rebasing today with the change that I believe merged yesterday (I believe that's the testing wrappers of cluster client calls), I think some things broke, essentially, if you're using it now. I think the biggest change from this PR 1376 that I referenced here in the issue was that you're using a context now, instead of a dedicated method, to invoke calls against the cluster, against the client, to, for instance, create a cluster workspace. So yeah.
D
Instead, what you're doing now is you omit the .Cluster: you pass it the context, with the hope that, you know, the create call (in this case a secret) is being executed within the context of the given cluster. And this is, as far as I understood, to ease sort of the compatibility with, yeah, plain vanilla kube clients.
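A minimal sketch of the style being described, assuming a hypothetical WithCluster helper; the actual kcp helpers and the exact shape of PR 1376 may differ:

```go
// Sketch only: the cluster name travels in the context instead of being selected
// with a dedicated .Cluster(...) method. WithCluster and clusterKey are
// hypothetical stand-ins, not the exact kcp API.
package clientstyle

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

type clusterKey struct{}

// WithCluster stores the target logical cluster name in the context.
func WithCluster(ctx context.Context, cluster string) context.Context {
	return context.WithValue(ctx, clusterKey{}, cluster)
}

// createSecret shows the new style: a plain client-go client plus a cluster-scoped
// context; something underneath (e.g. a round tripper) must honor that context.
// Old style, for contrast: client.Cluster(name).CoreV1().Secrets(ns).Create(...)
func createSecret(ctx context.Context, client kubernetes.Interface, cluster string, s *corev1.Secret) (*corev1.Secret, error) {
	ctx = WithCluster(ctx, cluster)
	return client.CoreV1().Secrets(s.Namespace).Create(ctx, s, metav1.CreateOptions{})
}
```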
D
However, it broke, because when you do the create, the implementation underneath create actually doesn't really consider the context, at least not in my tests, so I'm not sure how this could have passed. But just to raise awareness: Stefan, I don't know if the original contributor, Varsha, can take a look via Slack, but if not, maybe we can sync, or anybody else can have a look.
F
I have a stupid question here along these lines. Regarding the issue of client-side sharding, I was looking at the implementation of, well, the interface for this in regular kube. The REST interface is not actually an interface; there's a particular struct that things go through, so it's not like I can provide an alternate implementation.
F
There's a fixed implementation that I ran into. Did I miss something, or, if not, is this something that we've started discussing?
A
It's changing at the moment, so there has been work, and Varsha can talk about that. This is changing right at the moment, and I think some changes went in. So do you want to...
B
Explain? Sure. So the idea behind this was, instead of using a .Cluster, the cluster round tripper basically modifies the host and then passes on the request. So yeah, this was just a PR to check out if things work in the API export controller, and I think there were a lot of flakes in CI, but then eventually it passed. I can look into the test which is failing. Another drawback of this particular thing was that the discovery client didn't use a cluster-scoped context, so it was creating a context somewhere inside the implementation.
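A minimal sketch of a cluster round tripper of the kind described, assuming the cluster name is carried in the request context under a hypothetical key; the real kcp implementation differs in detail. If the code path underneath a create ignores this context, the /clusters prefix never gets added, which matches the base-path-only URL symptom mentioned below:

```go
// Sketch only: rewrite the request URL so the call is scoped to /clusters/<name>.
// clusterContextKey and the exact path scheme are illustrative assumptions.
package roundtrip

import "net/http"

// clusterContextKey is a hypothetical context key; the client helper that stores
// the cluster name and this round tripper must agree on it.
type clusterContextKey struct{}

type clusterRoundTripper struct {
	delegate http.RoundTripper
}

func (rt *clusterRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	if cluster, ok := req.Context().Value(clusterContextKey{}).(string); ok && cluster != "" {
		// Clone before mutating: round trippers must not modify the caller's request.
		req = req.Clone(req.Context())
		req.URL.Path = "/clusters/" + cluster + req.URL.Path
	}
	return rt.delegate.RoundTrip(req)
}
```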
B
So that was another thing where we had a workaround, in which we were modifying the host of the config and then sending it directly. But yeah, I can look into the test which is failing and dig more into it, and I also created an issue with the list of controllers and the list of instances in kcp which we need to modify to make the client calls cluster-scoped, at least.
D
From what I recognized, the request URL is not properly constructed. So, like, literally in the test that I have in my open PR, for instance, when you do a create and you give it a cluster context, essentially the URL will just have the base path and will not have the actual cluster appended to the URL that you gave it. That's the symptom that I'm observing.
D
Okay, yeah, that's also what I wanted to know: if anybody else recognized this. Varsha, maybe we two could sync out of band, maybe tomorrow, and maybe even have a live debug session; I'm not sure what time zone you're in.
C
Okay, David, yeah. I just had that: so the whole main and first part of the home workspaces pull request was merged last week, and it introduced the new kubectl workspace tilde syntax that I'm showing here. Now this one, as you see in the comments, and based on this...
C
Yes, thank you, Stephen. And based on this, we could go one step further and change a bit the semantics of the ws command, to better match the typical cd semantics that, you know, the ws command already looks like, mainly because now we have a home, a first-class-citizen home concept. We could think about having ws without parameters be exactly the same as ws tilde, because on some keyboards it's quite painful to type this, and then it would bring you to the home workspace, and keep the other ones, ws dot...
A
I have one comment. The question I ask myself in those cases is: if we change it, and think about it in six months, will we miss the old behavior? Will the new one be strange, or is it just normal, as if it had never been the case that ws just prints the current workspace? But we had that from the beginning.
C
Yeah, the thing, I mean, to me, my feeling at least, is that this one is quite intuitive. And by the way, having ws without anything print the current value is also not very intuitive. I mean, we could have, you know, ws current or something like that; it's quite explicit, it shows it. But ws without any parameter can be anything, in fact. So I mean, it's just a convention; to me it's at least this way.
F
Well, actually, that's my point, right? Because with kubectl, every new thing is another plug-in. So I'm not sure this is the right path to go down. Maybe I'm...
C
There is, I mean, the home workspace is precisely one case where you want to be where you are, exactly. Because when you do ws tilde, or ws without anything here, then you are brought to, let's say, root, users, bucket one, bucket two, and your username, and this precise logical cluster name you didn't know before; I mean, it's calculated for you, it comes from the server side. So at least in this case.
G
It's not just that. I think what I'm saying is that the existing kubectl-like semantic is extremely powerful, right? Like Chris is saying, if I want to edit something or delete something or create something, and I know it has to be in a particular workspace or in a particular namespace, I add that flag, and if the YAML I passed in doesn't match, I get an error. Yeah, I mean, the whole "look it up and then implicitly use it", that's a little bit less... yeah.
A
But this is a different discussion you're opening here. I think the user experience per se has been discussed many, many times. So we're really talking about dot, current, whatever the keyword for the sub-command is, and whether we use ws alone, without any parameter, for something else. Everybody who used this command for months has to relearn something: that's the question. Do we want that? Is it worth it?
C
We already have the dot-dot, so I mean, obviously you already have a reference to, you know, shortcuts used in cd and such shortcuts, so the meaning of dot should be quite obvious for anyone knowing the meaning of dot-dot.
E
Chris, yeah. Yeah, I just wanted to highlight that I've made simple code changes (adding a flag was one of them, and then just updating an endpoint), and the syncer test seems to be flaky, but also I've noticed seg faults and core dumps, and those have happened in multiple different tests.
E
It looks like systemd hasn't cleaned up all of them; I have a core file from just a little bit ago.
G
On the other e2e front, we recently made a change. One source of a lot of our flakes was a series of goroutines that would sit in the background and query the health and readiness of kcp during the tests.
G
For some completely unknown reason, etcd would go unready every once in a while for a little bit. It doesn't seem to ever affect the actual correctness of the tests, so I just made it only fail if it gets two in a row, and that seems to hopefully have made it much less likely. We'll keep thinking about how to fix that etcd issue, but if it's not actually leading to any problems, I don't think we should be wasting time re-testing.
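A minimal sketch of that mitigation, assuming a hypothetical checkReady probe function; it tolerates single blips and only reports failure after two consecutive unready results:

```go
// Sketch only: background readiness monitor for the e2e run. checkReady is a
// hypothetical probe against kcp's readiness endpoint.
package readiness

import (
	"context"
	"fmt"
	"time"
)

func monitor(ctx context.Context, interval time.Duration, checkReady func(context.Context) error) error {
	consecutive := 0
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return nil // test finished without two failures in a row
		case <-ticker.C:
			if err := checkReady(ctx); err != nil {
				consecutive++
				if consecutive >= 2 {
					return fmt.Errorf("readiness failed %d times in a row: %w", consecutive, err)
				}
				continue
			}
			consecutive = 0 // a single blip resets the counter
		}
	}
}
```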
J
The next thing that we're going to do on that, maybe tomorrow or so, is go through them. Anything that we don't think is going to finish up this week, we'll go ahead and move to 0.7, unless it's critical. But I think we should probably make it the goal to tag at the end of this week, or on Monday, for 0.6, just so we can call that complete. So for 0.7...
J
What that really means is that we should focus on finishing what we've already started, because we've got a lot of things in flight, and on preparing any designs for 0.8, so we can get ahead of those discussions where people have time. But it does sound like maybe the priority, even over design, would be stability for our test system, if people agree with that. There is a list in our work packages document of things that are in flight that we should swarm around and try to finish up where we can. So...
J
My suggestion would be, if the group agrees, that we team up on these topics and help them over the finish line. For folks that need other items to work on, there are three new topics to talk about. We do have some other contributors who are looking at networking, but it is still valuable, if anyone is interested in that area, to sync up on that; and API evolution and inverse syncing were ones that we know we want to start thinking about in the 0.8 time frame.
J
So the actual tangible steps are going to be: we're going to filter out 0.6, close anything that finishes by Friday, and move the rest to 0.7. We need names on any of the themes here that you want to team up and work on, to focus on getting the top list over the finish line; otherwise flakes, and finally design.
J
Okay, for location and workspaces it's Sheehan; I think Sheehan and David, and maybe Joaquin, have been working on that. So if you want to describe just kind of where it is and what's next.
J
If you have any, maybe a description of where things are and what the next steps are that we'd want to accomplish to close it out.
C
Yeah, if I'm not mistaken, there is quite some work in, you know, separating location workspaces from the workspaces that contain the real resources. Until now both were quite the same, so we have to update things on this, I think the virtual workspace, I assume; yeah, meaningful things, that's the main part of it.
A
Makes sense. Just the motivation: the motivation is that there are different owners of locations and sync targets, which means we will have different location workspaces, like bring-your-own kinds of workspaces where the clusters in the local data center are, and then there's a public service, and you might want to schedule your workloads to both: like one namespace to the bring-your-own clusters in the basement, and the others into the cloud. And to make that possible...
A
All right, the next one is quota. I'm not sure there's so much to say about that. Andy has prototypes of kube quota, but they are still not completely there yet, so this will continue; I think Andy will just continue this work. Same for sharding: Lucas is doing the first steps deep in the server binary, adding flags and adding flexibility, adding a second shard. That's basically the goal, plus fixing whatever comes up. This will not be the super-scalable sharding, but it will be the starting point to go, or to scale, horizontally.
A
Not yet, not yet. The idea, the plan, is that the workspace scheduler will by default schedule to the root shard, so everybody who just creates a workspace will land on root, and in end-to-end tests we will have a way to schedule workspaces under our control to the second shard and see what happens. That's how we want to attack it.
A
I'm not sure; maybe the microphone doesn't work. We talked about that earlier today. So basically, there are improvements in the controllers we have built. It's critical at the moment that the resource controller, which enables stuff, is the same as API binding. So there's good work to be done. Hello?
K
And now you're here, yeah, okay, yay. I did update the tracking epic and added like four issues that I think probably should be for 0.7. Effectively, the big one is this refactor that's defined, which I talked about today: we want to separate out the determining of whether permission claims are added, removed, or invalid into one controller, and the actual labeling of resources into a different controller.
K
That's the big one, and then the rest of them are smaller ones that are just going to be, like, stuff that we can add on to, and making sure that we do some end-to-end tests to make sure certain things can happen. One of the other big pieces is going to be tying together the exported permission claims and the binding permission claims, to let a user know that there are permission claims the export has that you haven't accepted, or things like that. So those are the two big ones, I think, for 0.7.
C
Yeah, mainly bringing back the transformations that were shown months ago as demos, step by step: basic transformations first, then using those transformations to manage syncer-specific views of kcp objects. That's the main point, so that we can provide a more, you know, consistent and sound foundation for sort of coordination controllers: all the, you know, features that would be awaited by a number of external actors, I mean, from kcp, you know, for example, to coordinate ingresses, or spread deployments, or all this.
C
This covers all the cases where we need to maintain a precise state of a kcp resource which has variants across syncers, which is not synced with exactly the same status, for example, on each syncer. So the transformation framework would be the base for this, which would then be the base for tackling the coordination controller work.
A
Basically, conversions: they are always hard to do, we know that from kube, and we are not sure it's really the right vehicle for kcp. If we can avoid some of that with another, more flexible migration mechanism, this would help. So we have to discuss those things; there are some explorations from some months ago, and we should talk about those and see what we can do, and how it would look when we implement that. It doesn't mean that we finish that in this release, but we have to start, I think.
A
Deployments covered by kube conformance can assume certain things: that the pod can reach a service IP, that DNS works, like you can resolve service domain names, and those depend on namespaces. Namespaces are mapped on the physical cluster, so they are different from the kcp side, which means the kcp user has no idea what the DNS name will look like. Which means, basically, what we have at the moment is not conformant, so we need some kind of mapping for DNS names.
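A minimal illustration of the mismatch, assuming a made-up mapped namespace name on the workload cluster; the exact mapping scheme is the syncer's concern:

```go
// Sketch only: the in-cluster DNS name a kcp user would naturally write does not
// match what exists downstream once the syncer has mapped the namespace.
package dnsmapping

import "fmt"

func Example() {
	logicalNS := "default"             // namespace as seen in the kcp workspace
	physicalNS := "kcp-abc123-default" // hypothetical mapped namespace downstream

	userVisible := fmt.Sprintf("my-svc.%s.svc.cluster.local", logicalNS)
	actual := fmt.Sprintf("my-svc.%s.svc.cluster.local", physicalNS)
	fmt.Println(userVisible, "!=", actual) // hence the need for DNS rewriting or mapping
}
```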
A
I think also the downward API, so that you can map a namespace, for example, into an environment variable; these things are open. So we need to extend the syncer, probably, to do that, to do the mapping, the rewriting, and some way for DNS. I just heard the words "CoreDNS plug-in"; I'm not sure what it is or whether it's a solution, but that's what I heard from networking. Except we probably need that. So everything around enabling the single-cluster use case: this is the network MVP.
C
Yeah, there are some cases, obviously, that emerged or were spotted in thinking about storage use cases, where it might be required to pull some objects that, you know, are created on physical clusters up to the kcp level, so that they are known and then can be, possibly, think of, you know, storage migration, for example: PVs and PVCs would then be, you know, pulled up to the kcp level and then synced back onto a distinct
C
physical cluster. So I mean, it's a sort of, you know, we have to design and also define the limits of inverse syncing, which would allow, in some really specific and well-defined use cases, pulling data just the inverse way of what we usually do, which is from downstream physical clusters to, you know, the upstream kcp level. So that's a whole, you know, area that we have to first explore, design, and then define the implementation, at least the minimum one.
G
Yes, and okay, it's not entirely clear to me if this requires us to redo all of the cluster name stuff or not, because we're not storing it in etcd, and continuing to use the deprecated field for now should probably be fine.
F
Yeah, I haven't had time to work much on it. I do also have a colleague who's interested in it, so I hope we can make some progress for 0.7. Cool.
A
This... so I'm just reading "certificates which are pre-generated by kcp". Is this about that topic, or is it about any kind of certificates?
C
And it used to work; more recently, when I did that, I had a certificate problem, and I was wondering whether maybe there is a problem in the order of the bootstrap, that the external hostname is not completely set up when we generate the certificates. Maybe there is something to search for here as well. I didn't touch it recently, though.
A
It's mine, actually; it's against the root workspace. I tried that and it failed. So basically, when you start kcp from the main branch, you only have the root workspace, which has all the APIs, so you can start using TMC. It doesn't make much sense in production setups, but for playing around it's probably important, because people start like that.
B
Yeah, that's fine. This is the list of controllers and places where we need to modify the client calls.
C
Since the recent renaming, I think we should have an agreement this time. Okay.