From YouTube: Community Meeting, August 9, 2022
A
Hi all, today is August 9th. This is the kcp community meeting. If you've got any topics, please feel free to add them to this agenda in GitHub, and as always, we skip over and leave till the end triaging new issues and looking at the milestone status. So I have added a question here on: maybe we should rename, or consider renaming, the Slack channel. It's currently kcp-prototype, and I think we've moved, hopefully, beyond the prototype stage.
A
Yeah, there's kubernetes-contributors, there's kubernetes-users.
A
Okay, we move on, and Lukasz, you've got the next item on here.
B
Yeah, just a small announcement that the e2e sharded CI job runs all e2e tests against a multi-shard environment. So that means you need to prepare your future controllers and tests for that, but only if you deal with, you know, multiple shards. And also there's an easy way to run an environment like that on your local machine: you can use the sharded-test-server binary with a flag.
B
It will create the front proxy, the root shard, and an additional shard, and then you can run the tests against that environment. Also, keep in mind that, you know, not everything works, because sharding is still work in progress. So if something doesn't work, feel free to ping me, and that's all.
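As a rough sketch of the local setup described above: the binary lives under cmd/ in the kcp repository, and the flag names below are assumptions, not taken from the meeting; check the binary's --help on your checkout before relying on them.

```sh
# Command sketch, not verified against any particular kcp version.
# Start a local multi-shard environment (front proxy, root shard,
# plus additional shards); the flag name is an assumption.
go run ./cmd/sharded-test-server --number-of-shards=2
```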
D
Questions? One addition, just one comment: the workspace controller schedules to the root shard.
B
Yeah, you have to specify, essentially, the shard name during scheduling, I think, yeah.
B
And also, like, authorization, I know, is broken, so if you would like to talk to the shard, essentially you would have to use the system:masters group.
A
Okay, Steve, you had some comments in chat. Do you want to say them?
C
Sure. I think there's going to be a fairly large amount of churn in controllers, as we move from assuming one virtual workspace to one per shard, and then, on top of that, from assuming one set of clients (or sorry, one set of informers) to two. We might want to think...
D
I pasted an example if you want to experiment with multi-shard; this new organization picture is there. This would give you a workspace on the second shard. If you don't do that, like you don't have the shard constraint at the end, you're...
A
Good, okay. Anything else on the sharding topic?
A
I have one which I need to add, which is that last Friday we released 0.7. We don't have release notes for it yet, but it is tagged, and there's a whole bunch of changes that have come in. I think we'll get the release notes up and then folks can take a look, but we have got a lot of work from a lot of contributors.
A
So thank you, everybody, for your hard work on that, and if you haven't had a chance to play around with either 0.7 or main lately, I think you'll find a lot of stuff that's pretty cool that's been going on lately. Any comments, Stefan, or anybody, on the 0.7 release? I know I was pretty vague on what's in there.
A
There's this refactoring of permission claims reconciling, or labeling, that Sean was working on and I'm helping him out with. We could tag with what we have now, or we could wait. I don't think it... I mean, we could do a 0.7.1 and then a 0.7.2; doesn't really matter.
A
We can do that. All right, yeah, we'll do that: we'll get 0.7.1 out, and Sean's can go into 0.7.2.
A
Okay, anybody have any topics before we go into issue triage and epic review?
A
Oh right, so this one: I encountered this in a prow run, and it didn't have enough logging to figure out why it failed. I have since added a PR with more logging. So if you all happen to be working on pull requests and you see a failure that looks like this: if you see the test "user home workspaces: create a workspace in the non-existing home and have it created automatically" in a workspace request.
A
If you see it fail where it's trying to create a workspace, workspace-1, and it gets a NotFound trying to create the ClusterWorkspace, please let me know, or please add a comment to this issue with a link to the prow job, because it doesn't happen all that often, but I think there's a bug lingering in there somewhere.
A
Yeah, we don't have it hooked into the BigQuery infrastructure that Kubernetes is in for the job failures, but we are in TestGrid. So if you go to TestGrid and you just look for kcp.
A
You will see that we have unit tests, e2es, and e2e multiple runs. These are all from pull requests. We are running periodics, but I haven't added them to TestGrid yet. So, you know, you can come in and look, and you can ask it for...
A
Excluding non-failed tests, and you'll... I don't know, sometimes you see, I guess, yeah, there's a smattering of tests that have failed here and there, and so you can go look at any individual one when it fails. But in terms of your specific question, I'm not aware of a way to do that. And if you want to look at the periodics, we do have them pinned in our Slack channel, under periodics or under the CI folder, and you can see we're running all of these, every e2e and e2e-multiple run, every two hours, and I think they're generally good. And Steve, you wrote we might be able to get search.ci to index ours; is that search.ci.k8s.io?
A
Why don't we maybe file an issue, and we can get somebody to look into it.
A
So I added an issue about enabling audit logging for e2e-launched kcp servers. But before we talk about this: Stefan and Steve, I know that the two of you were talking about how to proceed with our e2e server setup, and you all were talking about having some that were run in-process and some that were shared.
C
We can probably still add it for local dev, if you want it, but I imagine, yeah, in CI we will have a single kcp, either started with kcp start or something more... or sorry, there's two cases: one with kcp start, then one with the sharded setup, but...
A
But if we preserve this code... or sorry, if we preserve the ability to run kcp from an e2e test, which we at least need for the disruptive ones, the destructive ones, I think it would be useful to get the audit logs, because they're not there.
A
Navigation issue with the use... oh, this was the one around... yes. Didn't we fix this?
E
Well, in fact, I don't think, you know... the issue was mainly explaining how it currently works. I don't think there are additional accesses for that.
E
This is, you know, related to the fact that you need to have the get... yeah, you just need...
E
You just need to have the get permission in any case, because it has already been some time that it has been implemented like that; you can also see it in the integration tests. But we never communicated on that, and since we switched to home workspaces, I mean, a sort of, you know, fuzzy area has been there since the last release. So it's mainly a question of making it clear.
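The get requirement being discussed is ordinary Kubernetes RBAC. A minimal sketch of granting it, assuming the workspace resource lives in the tenancy.kcp.dev API group (the group, resource, and role name here are illustrative assumptions, not taken from the meeting):

```yaml
# Illustrative only: the plain "get" permission on workspaces
# discussed above, expressed as a standard ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: workspace-viewer        # hypothetical name
rules:
  - apiGroups: ["tenancy.kcp.dev"]  # assumed API group
    resources: ["workspaces"]
    verbs: ["get"]
```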
D
But to be clear, it's still broken if you give somebody access without get permissions, correct?
D
If somebody does sharing manually, they have to give those two permissions, yes. Which is fine, I mean, if we say that, yeah.
E
Sure, I mean, that's... that was the idea of the issue previously: just to make it clear, I mean, make the answer clear and track this question. But obviously, related to, you know, sharing UX: the preliminary task was really to come back to raw, you know, standard RBAC first, and then it's much easier to build a UX layer on top of that.
A
Joaquim or David, could... I think you all were in this code recently; is this something you could look into?
A
Schedule this, so, you know, it's... I put TBD. I...
C
It's a little bit tricky: say you create, for example, a SyncTarget called cluster-one.
C
Well, yeah, you remove cluster-one and recreate cluster-one, yeah, and the previously running syncer will keep running, but things will get a little bit weird, as it's using a different SyncTarget UID. So this should be fixed as 1687, yeah.
A
Okay, just in the future, when you do a PR to fix an issue, you can always put "Fixes..."
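The convention being referred to is GitHub's closing keywords: a line like the following in the pull request description links the PR to the issue and closes it automatically when the PR merges (the issue number below is a placeholder):

```text
Fixes #1234
```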
A
This one was mine, related to running too many kcp instances in parallel. I think this relates to the earlier discussion. So, Steve, can I, like... are you going to run point on the various e2es, and would this fall under your work on that, or are you not planning on doing that and we need somebody else for it?
A
Alrighty, that is all of those, so let's just do a quick milestone check-in on the epics. I don't think Mike is here. Has anybody stepped in, or has anyone heard if Mike has done anything on pride in fairness...
A
Okay, starting this: I'm assuming still in progress. Any... I know Lukasz had his update earlier. Any specific updates?
A
Okay, all right: do you all need to make any changes to any of this that's happening?
A
So
api
export
permissions
on
binding,
as
I
mentioned
earlier,
I'm
helping
out
here
so
sean
was
working
on
the
refactoring
of
the
controller
I'm
helping
him
out
with
that
looks
like
we've
got
some
other
stuff
before
we're
mvp
complete,
so
I
guess
we'll
have
to
see
if
we
think
those
will
fit
into
zero,
eight
or
not
yeah.
We
should.
A
All right, ClusterWorkspaceType, take two. I think the...
A
I know we had this API binding initializer work that Steve had started. I think Stefan and I were talking, when you were on vacation, about possibly adding bindings to the ClusterWorkspaceType spec, instead of having a new, like, APISet type.
D
Yeah, we don't have to talk about the details here; we should come back. I think we want the solution. We all know that.
A
Yeah, we can set up a separate time to chat. User home workspaces: what's left?
E
Yeah, well, it's not a hard requirement, in the sense that, if I remember correctly, what we did can work on any shard.
I mean, if it's not... not even the... if it's not the right shard, then it would be forwarded to the external client, to the front-proxy address, you know, and finally end up, yeah, with the home workspace being created at the right place. So...
E
I think, yes. Maybe the only thing is, at the very end you have doc and demo, and I'm not sure the demo was really stored somewhere, you know. I...
A
Okay, thank you for all the work on that, David; I'm so much happier that we have home workspaces now. Yeah, all right, quota. This one has mostly been mine. So we have per-workspace quota support for namespaced resources, normal and bound.
We do not yet have cluster-scoped resources, but I have a proof-of-concept PR working on that that works.
A
So there is still some work left to do. If anyone is interested in helping out with quota, please let me know, but at least in 0.7 we do have the sort of standard Kubernetes resource quota that you'd expect, per namespace. It's only object-count quota for right now, so if you're trying to quota CPU, memory, or NodePort services, that's not implemented yet.
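Object-count quota, as described above, uses the standard Kubernetes ResourceQuota API with count/&lt;resource&gt; entries; a minimal sketch (the namespace and limits here are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts       # illustrative name
  namespace: default
spec:
  hard:
    count/configmaps: "10"  # at most 10 ConfigMaps in this namespace
    count/secrets: "5"
```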
I will come back in and add that. All right, multi-workspace controller development.
A
This is around the work that Fabian is finishing up on the informers and listers that we can generate, that'll be multi-cluster aware. So, Bobby, I see you here; any updates? I know we've been in contact, but just at a high level, what's left to do?
C
Yeah, I think, with what Steve helped with yesterday, we're now generating upstream-interface-compatible informers and listers for everything. So it's just kind of a matter of starting to plumb them through everything else, which I'm going to get to this afternoon, and hopefully there won't be any more bugs uncovered.
A
Okay, awesome. And as part of that work, along with what Varsha has been doing, we will eventually undo our changes to the client generator and the informer generator that we made to our Kubernetes fork, and switch anybody who's not using controller-runtime over to a new code generator that we've written, which generates informers and listers that are multi-workspace aware. And we already have a new way, that doesn't require any code gen, to take a client and make it multi-workspace aware.
A
So we have a lot of docs that we need to do, but we've made a lot of progress. So thank you to you all for all that hard work.
A
Okay, this one, Stefan: location workspace is basically right at this point?
A
Thank you. And then the last one is logs, exec, etc. I haven't heard from Antonio in a while. Have you talked to him, Stefan?
A
All right, we've got 20 minutes left, if anybody's got anything you want to chat about.
A
Okay, well, thanks everybody. It's good to see you as always. See you next time, bye.