From YouTube: Community Meeting, July 19, 2022
A
Hey everybody, today is July 19th. This is the kcp community meeting. I am currently screen sharing the agenda for today. If you'd like to add any items to the agenda, please feel free to go into GitHub and add comments to this issue, and we will get started. So, Joaquin, I'm going to stop screen sharing so that you can do a little demo.
C
I will show you the sync target, as you see. So let me show you: well, in kcp we have some basic deployments here in these namespaces. Okay, so let me show you how it looks.
C
On the downstream Kubernetes cluster, let's list the namespaces. As you see here, well, this one is the syncer that we have deployed using the CLI, and this is how the new downstream namespaces look now. Before, they were longer and they were lacking some information, so there could be some conflicts; right now, as you see, they are shorter, which helps external controllers that generate some kind of URL or whatever based on those namespace names.
C
So, as you see here, we serialize this information as JSON inside the annotation, so we know that this namespace comes from the workspace (testing), what the upstream namespace in kcp is, and the sync target that is synchronizing it: it comes from this path, has this name (which is the syncer that's synchronizing this namespace), and carries the sync target UID that we can find upstream.
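The locator just described can be pictured as an annotation on the downstream namespace. This is only an illustration: the annotation key, field names, and values below are assumptions sketched from the demo, not the exact kcp schema.

```yaml
# Hypothetical downstream namespace created by the syncer.
apiVersion: v1
kind: Namespace
metadata:
  # Short, hash-based name generated by the syncer (the "shorter" names
  # discussed above).
  name: kcp-hcbsa8z6c2sp
  annotations:
    # JSON locator: which workspace and upstream namespace this maps to,
    # plus the sync target (path, name, UID) doing the synchronization.
    kcp.dev/namespace-locator: >
      {"syncTarget":{"path":"root:testing","name":"syncer-one","uid":"<sync-target-uid>"},
       "workspace":"root:testing","namespace":"default"}
```

An external controller on the downstream cluster can parse this JSON to map the short namespace name back to its upstream workspace and namespace.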
C
Okay.
This
has
been
done
previously.
We
had
only
these
two
fields
and
this
could
cause
some
conflicts
with
other
ones.
So,
apart
from
that,
there
is
something
important
here
which
is
now
we
have
we
well,
we
can
add
other
singers,
so
we
can
add
the
same
downstream
kubernetes
cluster
as
another
scene
target.
Let
me
show
you
quickly,
usually
cluster
one.
I
will
add
basically
sinker
two
and
now
I
will
apply
the.
C
If
I
record
on
the
first
cluster,
the
cluster
one,
we
should
see
how
we
get
new
namespaces
synchronized,
and
this
one
should
be.
If
we
check
the
namespace
locate
or
extract,
we
will
know
which
one
it
is.
Let
me
show
you,
so
it
is
the
full
namespace
from
the
same
workspace.
So
it's
the
same.
C
So we get the second syncer. This is useful even for developing and for testing things, and, well, you can even have several clusters on one physical one, right? And that's it for the demo; I mean, it's really simple.
A
Thanks
joakim
one
quick
question:
is
there
a
reason
that
we
chose
workspace
versus
path
as
the
two
key
names
there?
Could
we
be
consistent.
A
Okay, next up is Paul and OWNERS files.
E
Yeah, so I wanted to let folks know that I opened an issue about using OWNERS files to try and make sure that we get reviews spread out where we can. I'm asking folks: if you're comfortable in a section of code, let us know, or if you have an opinion on the way OWNERS files are used, throw it in that issue. But the basic proposal is adopting the same sort of system that we see upstream in Kubernetes for reviewers and approvers, so we can spread the love a little bit.
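The upstream Kubernetes convention being proposed looks roughly like the sketch below; the usernames and label here are placeholders, not actual kcp maintainers.

```yaml
# Hypothetical OWNERS file placed at the root of a code package, in the
# Kubernetes style: reviewers get review requests for changes under this
# directory, approvers can /approve them.
reviewers:
  - alice
  - bob
approvers:
  - alice
labels:
  - area/syncer
```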
A
All righty. Stefan, time bombs.
F
I was just wondering: I don't know how many good first issues we have at the moment where we have written out, like, what…
A
I think the call here is: if folks have bandwidth and want to look into these, please do, and these will obviously be worked on before we hit 1.0.
A
But, you know, we have stuff slated for 0.7, or, if we need to bump things around, we can; but I don't think that we can expect that we're all just going to rally on all of these right now.
A
Yeah, I mean, I think basically all of the stuff that's in TBD, in addition to the time bombs, if they're not here. But I mean, we have 138 issues that were opened at some point where we decided, yes, we eventually need to get around to them, but we haven't. So anything that's in the TBD milestone is a candidate for inclusion in future milestones.
G
I'm next, all right. So I'd like to, you know, share with you something I've been working on quite recently. Essentially, what I want to show you is how working with, or even creating, a cluster workspace against a multi-shard environment could look. So I'm going to share my screen with you.
G
Now it's creating shard one. I gave it two additional flags: one is a shard name, just to identify the shard, and the second one is a path to a kubeconfig that points to the root shard. I use that for various things; for example, I use it to register the new shard inside the root shard. So it looks like our environment is ready, so let's log into the root shard.
G
And, as you can see, I've got two shards, the root one and shard one, and now I'm going to create a cluster workspace in the root shard, but I will schedule it onto shard one. What I mean by that is that the cluster workspace object will be hosted by the root shard, but the content of that workspace, like namespaces and secrets, will be hosted on shard one.
G
So let me show you the manifest: we are going to create an example cluster workspace, and this is how we are going to schedule that workspace on shard one. And let me just show you the current workspaces before I do that.
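The manifest from the demo presumably looks something like the sketch below. The API group and kind follow kcp's tenancy API of the time, but the shard-selection field is an assumption; the PR discussed a bit later in this meeting has the real shape.

```yaml
# Sketch: a cluster workspace created in the root shard whose content
# (namespaces, secrets, etc.) is scheduled onto shard one.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspace
metadata:
  name: example
spec:
  type: Universal   # approximate; the type field's shape may differ
  shard:
    name: shard-1   # hypothetical field: pin this workspace's content to shard one
```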
G
So if you would like to, you know, learn more about the details, you can check out this PR. It's kind of messy, but it gives us a starting point and helps us identify all the places that need to be aware of multiple shards. So, for example, right now what we do is simply pass an additional informer.
G
So, for instance, I had to change the authorizers so that they will try to find the given workspace in the root shard if the local shard does not contain it, and perhaps this is something we will have to do to all existing and new controllers until we have something more proper; for example, in the future we could, you know, change informers to be aware of multiple shards and provide a uniform interface to end clients. And that's all I have for you today. Let me know if you have any questions.
A
Thanks, Lukas, that was really cool. I am looking forward to, you know, where we go with that beyond just two shards.
A
Okay, so before I move on to doing issue triage, does anybody have any other topics of any kind that you would like to discuss?
E
I'll just put a reminder out there from planning that we hope to talk about the design topics in next week's community call and the following community call; those are topics that we expect to work on in 0.8, so we can get a little bit more feedback on them. So hopefully folks are having their design discussions this week and will be ready to present next week and the week after.
A
Thank you, Paul. Yeah, I know that I've been invited to a couple that folks have scheduled, and hopefully others are as well.
A
All right, I'm gonna start doing issue triage. We've got eleven, and if y'all come up with new topics that you wanna discuss, please feel free to raise your hand in Meet and we can go to those. So first up was a flake around permission claims. I know, Sean, you had an idea for how to fix this with the controller.
A
Okay, on the topic of whether we have seen tests flake recently: I know Kubernetes has the fancy web UI where you can look at tests; I know there's Testgrid, and then there was the other one whose name I'm forgetting. Do we have access to that, or would we have to get added?
A
Yeah, I mean, I think it was like a big data thing. All right, I have not looked at this before. Stefan, I think you had talked with Matusha about this a little bit, right?
D
A naive delete option? Oh yeah, yeah, that's something he wanted to add, like an experimental flag for development purposes, sent in a PR, so I haven't seen it yet. Maybe.
A
Okay, I'm just going to put TBD on it, because it's not in 0.7. Next: to use a cluster workspace type, the user needs to have access permissions in workspaces where that type exists.
D
Yeah, this is a hole in our story at the moment, so we need a solution for that. Briefly, like we talked about, maybe those export concepts, like what should drive the catalog, might solve that.
D
If you can place RBAC rules, then maybe it's fixed.
D
Yes, our Argo CD is checking /readyz, and the URL we have is /clusters/something, so we don't have readyz under that; so if we could just redirect those to the main readyz, that would solve it. Okay.
D
It depends on the interest of the person doing it, you know.
C
The issue is that in some cases, for example, the global ingress controller will monitor a namespace for ingresses and will add a soft finalizer to the ingress, but that's actually not useful if the deployment and the services and everything else in the namespace are gone; so they would need to add the soft finalizer to all the resources in the namespace.
B
If you want existing kube apps to just work, right, then how do you designate a workload? It seems like you are using the namespace to designate the workload right now, so that's why we're asking this.
H
Yeah, maybe it's because, if you put everything related to a workload inside a namespace, you don't need to explicitly define the relationships between the objects that constitute this workload, right? So until we have a design that correctly defines those relationships, we can put everything into a namespace, yeah.
A
Okay, well, as usual, TBD on this. And then, yeah, Joaquin or Phil, if one of you all can come back and add some more details, even just a pointer in the code to what the annotation is, that would be helpful.
A
All right. Stefan, you filed this security issue.
D
So the idea was that we might want to have default network policies which basically stop workloads from accessing anything outside of the workspace, like outside of the namespaces which belong to the same workspace. And, in addition, we can also support network policies defined by the user, but as an add-on to the default policies.
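As a sketch, the default policy being suggested could look like the deny-by-default NetworkPolicy below, allowing traffic only between namespaces of the same workspace. The workspace label key and values are hypothetical; nothing here is an agreed design.

```yaml
# Sketch: confine workloads to namespaces belonging to the same workspace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: workspace-isolation
  namespace: kcp-hcbsa8z6c2sp     # a downstream namespace of the workspace
spec:
  podSelector: {}                 # applies to all pods in the namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kcp.dev/workspace: root-testing   # hypothetical label
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kcp.dev/workspace: root-testing
```

User-defined policies would then be layered on top of this default rather than replacing it.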
A
So I think we need some action items here. I mean, I think the title we might want to tweak a little bit, or maybe we turn this into an epic, or a mini-epic, in terms of network policy security. Like, do we need an action item to create a default network policy?
A
Do we need to make sure that the syncer is configured by default to sync them, that sort of thing? So if somebody wants to come back in and update this or add comments for concrete action items, I think that would be helpful.
A
Okay, next up is: oh yes, adding known internal types to internal schemas for the virtual workspaces to use. I had some conversation with Sean in Slack, I think yesterday, about this, where we have a package-level shared exported variable that lists the types that we want all virtual workspaces to potentially have access to.
A
These are internal types, so config maps, namespaces, secrets, and service accounts are, I think, the four that are available, and that makes sense for the syncer virtual workspace; but for the API export virtual workspace, we're potentially going to want to add more, like RBAC resources. Sean had put this in as well. I think this is the sort of thing we probably would want to put into 0.7 as part of the ongoing work for permission claims.
A
All right, I'm gonna put it in 0.7 for now; we can always bump it, and I'm happy to work with Robin, or whoever, on what to do here.
A
Okay, I put TBD on this, although, I don't know, I feel like we probably need to fix this sooner rather than later, so I'm actually going to put…
A
Next: no way to get the owner/creator of a home workspace, as we clean the annotation. Stefan and I had talked yesterday about possibly storing a hash of the workspace creator as an annotation on the workspace, so that you could at least compare a username, or rather the hash of it, against the annotation.
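The idea being floated is roughly the following; the annotation key and the hash encoding are assumptions for illustration, not an agreed design.

```yaml
# Sketch: store only a hash of the creator, never the username itself.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspace
metadata:
  name: home-ws
  annotations:
    # Hypothetical: hex SHA-256 of the authenticated username. A client can
    # hash a candidate username and compare it against this value, but the
    # owner's identity cannot be recovered from the annotation alone.
    experimental.tenancy.kcp.dev/owner-hash: "<sha256-of-username>"
```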
H
Yeah, I think the home workspace is quite a specific thing, in the sense that, in its inherent meaning, it's attached to a user. It's not just a resource for which permissions can be given to anyone; it has been created by a user and it's dedicated to being really personal to this user. So maybe…
H
Yeah, the bucket: so the full path is based on a hash of the unmangled username, so the probability that the full path of the home workspace is different…
H
Yeah, to be clear: at the beginning, before the period that Stefan was speaking about just before, the two-day period, the owner was mainly important so that in any case we have the information of which user requested the creation of this home workspace, to correctly calculate the buckets and also to correctly set up the RBAC rules that give the user access to his home workspace.
H
So it was mainly at bootstrapping and automatic creation of the workspace, and the various steps of it, that you needed to at least temporarily store the owner; also for your initializing work, you know, the initializer virtual workspace. But it seems that in the case of the user sign-up flow they also needed this owner information, I mean, later on. But since we didn't want to leak the whole owner struct, that's the meaning of the last PR of today: just keeping the minimum user information for…
D
But this is just one small example; workspaces also need that, maybe, so I would not over-design something until we see the full picture, and we should come to that. I mean, it's not far away. All right.
A
Okay, I quickly triaged what was left, so that's done. Milestone epics for 0.7: I don't know, I mean, given that we're just starting design, I don't know that we need to go through these, but here's what we have on the list for right now.
A
So if you have any interest in any of these areas and you have some spare cycles to help out, please take a look. Also, we do have the work packages document, where we have candidate themes and explorations for 0.7, so I'll paste this into chat as well. If y'all are interested in helping out, we'd love to have more folks involved.