From YouTube: Community Meeting, June 14, 2022
B: Sure. Basically, in the background we are currently thinking through cluster workspace types: how do users use them, how do you do something with them today? We've got the org, team, and universal types that are in there, and they each do one thing. We're trying to take away some of the hard coding and make this a little more user friendly. So if you have thoughts along the lines of "I've got this structure I want to create in kcp, my organization looks like this," or "I want to do this particular thing with the workspace and I'd like a type to help me," give a shout on Slack, read through the doc, leave feedback. We're just gathering information right now. That's all.
C: Okay, let me know if you can read this well. I have kcp running, so let's focus on this one. Can you read it well? Yes? Okay! So what I wanted to show you is something that was hidden behind a feature gate on the syncer, which is basically the ability to add virtual finalizers to the syncer.
C: "testing", and enter. Sorry. Okay, so now we are on the testing workspace. What I want to do now is basically create a workload cluster, cluster-1, and this is the syncer image. I will apply that into my kind cluster. Okay.
C
Four
global
clusters:
we
have
cluster,
one
see
it's
ready,
so,
let's
move
on
what
I'm
going
to
do
is
to
create
a
deployment
that
they
have
already
here
that
they
have
already
here
is
the
test
deployment,
nothing
special
about
it.
Just
we
need
some
resource
to
showcase
those
new
thinkers
and
now,
let's
check
the
deployments.
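The test deployment itself is not visible in the transcript; a minimal sketch of such a manifest (the name, labels, and image below are assumptions, not the demo's actual values) could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment   # assumed name; the demo's manifest is not shown
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: app
        image: nginx:alpine   # any image works; the resource only exists to exercise the syncer
```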
C
So
it's
created
it's
already
red.
It's
it's
ready,
so
it
has
been
synced
downstream
into
my
cluster
one.
So
checking
the
the
object
itself.
You
will
see
that
now
we
have
a
finalizer
which
is
a
standard
kubernetes
finalizer,
so
the
sinker
on
the
cluster
one
is
kind
of
signaling
its
ownership,
so
this
object
has
been
seen
downstream
and
it
has
this
finalizer
because
we
have
the
state
internal
blah
blah
who
tells
the
cluster
one
to
sync
the
object?
Okay.
C
So
this
is
the
new
thing.
So
now,
if
we
try
to
delete
the
deployment,
we
will
see
that
nothing
stops
us
from
doing
that.
So
it's
gone,
but
now
let's
apply
the
deployment
again
and
let
me
show
you
a
new
way
for
this.
Virtual
finalizer
is
per
location,
so
it's
an
annotation
that
you
can
set
to
block
the
deletion
of
an
of
the
object
in
a
location,
and
this
is
meant
to
be
used
by
by
external
controllers
as
a
back
pressure
mechanism.
C: ...slash the workload cluster name, okay, and here you can actually set whatever you want as the value; well, "testing". Okay, so now it's annotated. Let's check the deployment: you will see, same as before, the finalizers are set, but in this case we have this finalizers annotation with "testing". This finalizer annotation supports comma-separated values, so you can have multiple external controllers adding different virtual finalizers. So now, if we try to delete the deployment, as you see, it just basically hangs here.
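As described, the virtual finalizer lives in an annotation whose key ends in a slash plus the workload cluster name; the key's prefix is not audible in the recording, so it appears below as a placeholder, and the second value is an invented example of another controller's entry. A sketch:

```yaml
metadata:
  annotations:
    # <annotation-prefix> stands in for the syncer's finalizer annotation
    # prefix, which is elided in the recording; the suffix is the workload
    # cluster name. The value is a comma-separated list, so several
    # external controllers can each add their own virtual finalizer.
    <annotation-prefix>/cluster-1: "testing,my-ingress-controller"
```

Clearing the value back to the empty string, as shown later in the demo, releases the object so the syncer can delete it downstream.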
C: So that's how the syncer now knows whether it needs to actually delete the object. The syncer looks for this deletion timestamp, not the global deletion timestamp of the object but the per-location one, and the syncer is not deleting the object downstream in the physical workload cluster, because we have this virtual finalizer set.
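The deletion rule the demo describes (delete downstream only once the per-location deletion timestamp is set and no virtual finalizers remain) can be sketched in Go. This is an illustration of the logic, not the syncer's actual code; the function names are invented:

```go
package main

import (
	"fmt"
	"strings"
)

// splitVirtualFinalizers parses the comma-separated value of the
// per-location virtual-finalizer annotation. Blank entries are dropped,
// so an empty annotation means "no virtual finalizers remain".
func splitVirtualFinalizers(value string) []string {
	var out []string
	for _, f := range strings.Split(value, ",") {
		if f = strings.TrimSpace(f); f != "" {
			out = append(out, f)
		}
	}
	return out
}

// shouldDeleteDownstream mimics the decision described in the demo: the
// object is removed from the workload cluster only once deletion has been
// requested for that location and no virtual finalizers are left.
func shouldDeleteDownstream(deletionRequested bool, annotation string) bool {
	return deletionRequested && len(splitVirtualFinalizers(annotation)) == 0
}

func main() {
	fmt.Println(shouldDeleteDownstream(true, "testing")) // blocked by a virtual finalizer
	fmt.Println(shouldDeleteDownstream(true, ""))        // annotation cleared, deletion proceeds
}
```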
C
So,
basically,
what
we
can
do
right
now
it's
to
delete
the
to
set
the
annotation
to
empty.
C: Okay, that's a good one. Those virtual finalizers are per location, so what we are working toward is that, when the scheduler supports multiple locations, you can have one resource deployed into two different physical clusters.
C
What
we
will
do
is
allow
the
user
even
to
do
a
failover,
and
some
this
is
meant
for
some
use
cases
of
external
controllers,
like
an
ingress
controller,
which
wants
to
handle
and
apply
back
pressure
to
be
able
to
wait
until
one
place
has
been
taken
down
or
wait
until
the
new
place
has
set
up
properly
the
new
hostname,
so
they
can
do.
For
example,
the
the
failover.
A: I'm not sure anybody is here from the storage team. We will publish the recording, so maybe it's worth tagging someone in Slack; this is especially interesting for them and, of course, for ingress, as you said. So you should spread that a bit more, maybe cut out the video or something. I really liked it; it was really good.
A: Let's have another very short demo, not as polished and advanced as yours, but anyway. You saw the picture of the colorful output already; I just want to say some words about that. So, I mean, the big epic is sharding, and sharding means you have to install something locally if you want to develop.
A
Cluster,
for
example,
plus
a
front
proxy,
and
this
command
does
everything
around
plumbing
itself,
like
all
the
certificate
magic
which
must
be
done
like
we
need
a
request
header,
so
we
need
serving
inserts
and
everything
must
fit
together.
So
it's
quite
a
bit
of
plumbing-
and
this
is
just
hidden
from
from
from
the
user
from
the
developer.
When
you
use
this
command.
A: You have seen the picture already. So if I run that, it's compiling Go. There are multiple kinds of parameters, or flags: the ones which are not prefixed are for the test server, the ones with --proxy are for the front proxy, and the --shard ones are passed to the shards. So you can customize all three components. And yeah, it does crypto magic here for creating CAs and certs and so on; eventually it will launch kcp-0.
A: kcp-0 is the root, so we will have one singleton, basically, in the cluster, which will host the root workspace, and we have to pass around, at least at the moment, the kubeconfig to access it, because it is basically the source for all ClusterWorkspaceShard objects; every other shard has to connect to it to get this data. Eventually this will change, probably, and we will replicate it and all this stuff, but for the moment it's as simple as that.
A: kcp-0 starts; everything blue is kcp-0, and when it's ready, after some lines, you'll see some readiness checks, so there's some waiting implemented, obviously. When it's ready, the green one, that's the front proxy, which we start then. One advantage of this command is you can just hit Ctrl-C and it's gone; all processes are killed, so there's nothing to kill manually, which is pretty convenient. And that's where this is at the moment: you can talk to it, and it creates the usual admin kubeconfig.
A
We
know
so
it
looks
like
kcp
start
from
the
outside,
but
does
of
course,
all
this
process
process.
Magic
kcp
admin
cube
config
can
be
used
so
as
as
normal
same
contexts
and
if
I
try
and
everything
works,
your
default
as
usual,
and
you
can
say
the
usual
thing,
so
you
can
ask
for
company
maps,
for
example,
but
they
are
on
so
this
works
and
everything
else
should
work
as
well.
Tests
are
not
green,
yet
so
there's
still
work
to
be
done,
but
you
can
at
least
do
simple
commands.
A
This
is
fine,
so
it
works
all
right,
so
this
command
is
just
next
to
the
other
one
in
cmd,
and
hopefully
we
will
run
that
in
ci.
As
I
said
right,
that's
a
demo
questions
comments
and
somebody
or
some
people
ask
you.
Of
course
the
command
creates
proc
log
files
per
com
for
components.
They
are
not
colored,
so
just
pure
text
files,
it's
just
on
the
screen
to
understand
when
one
command
shows
errors
and
is
fine
or
something
like
that
already.
These
two
commands
is
pretty
hard
to
really
understand.
A: So maybe one question, Steve, this is for you: the sharded, not the shard, the sharded server we want to move to production, I guess. Has there been any work in this direction? If yes, I would try to take that and add another case; if not, I would continue scripting.
B: If we'd be getting more test coverage from it, like if the sharded...
A: I don't think there's anything planned for prototype 6 at the moment, so I will talk to Andy whether this is actually part of 0.6. And the last one, "consume compute, transparent multi-cluster": that's mostly placement, I guess, plus fix-ups and bug fixes and so on. Yeah, this is placement. All of them have, as far as I have seen (I asked on Slack), a line for prototype 6, plus stretch goals.
A: All right, the other thing: let me go to the milestone first. The milestone is pretty big.
A: I tried to move out those which have no name, so which don't have an assignee. What we have left is basically this: there's a flake here which happens once in a while, but I'm not sure it's so painful at the moment; if anybody wants to look into that, that's highly welcome, please do. And the second thing is about CRD creation. Basically, on the one hand, we want to change the system CRDs into API bindings.
F: Yeah, but that one is... I answered your question; I think it's already...
B: There's probably something simple, I don't really know, but certainly somewhere in server startup we're adding more than one health and readiness check. Okay.
A: Okay. And I guess it's not a blocker; it doesn't break anything, does it?
G: Yeah, it's not really a blocker. We just have to use two clients, basically: one for creating the informers, watching the wildcard and doing multiple workspaces. So yeah, we have to use two clients in the meantime until that gets fixed, and I think it also applies to the delete operation.
G: So what we had hoped is that we could, as an API provider, use the API export virtual workspace URL, and that would enable us to watch over multiple workspaces; but in a restricted environment it does not. In an environment where we have to create, you know, a service account per workspace, it kind of defeats the purpose of using the virtual workspace. Yeah.
A
So
I
added
feature
completion.
I
think
we
missed
that
in
api
export
virtual
workspace
right
so
I
mean
you're,
not
blocked,
but
yeah.
It's
lacking
so
help
wanted.
So
everybody
who
wants
to
look
into
virtual
workspace
api
servers,
I'm
not
sure
how
hard
it
is
after
steve's
functional
biggest
tree
there.
B
I'm
surprised
that
it
panics,
I
think
it
just
shouldn't,
have
an
end
point,
and
I'm
also
surprised
that
he's
not
able
to
do
discovery,
which
I
feel
like
are
two
separate
bugs
and
the
third
bug
we're
not
the
third
bug.
But
the
third
missing
feature
is
that
there
is
no
create
yeah,
because
there
is
no
create.
I
guess.
A
The
neighbors
are
correct
right.
It's
it's
part
of
that.
It's
about
a
part
of
sentence
missing,
feature
yeah,
but.
B
Yes,
yeah
so
I'd,
say:
there's
there's
three
issues
here
and
I
can
break
them
out
into
separate
ones.
A
A
A
So
it's
a
time
bomb.
Where
is
it
used?
I
mean
resource
controller
workload,
resource
scheduler.
I
think
it's,
the
former
name.
F: Now, actually, it's sort of, you know, another one that probably looks alike, but in the future it would not be necessary anymore with the virtual workspace, okay, because when they expose...
A
A
A
A
Oh
from
joachim's
demo,
demo
jacking,
can
you
create
an
issue
about
the
timestamp?
It's
not
it's
not
the
same
format
as
the
one
for
it
was
a
deletion
timestamp,
something
which
has
spaces
inside
okay,
okay,.
A: Can you comment on this? Well, it's just an issue; what is it? Yeah.
E: So it's just the work tracking getting the API into a reasonable state: okay, the API bindings accepting and rejecting permission claims. But I feel like I need to finish the permission claims for the API export first before I can get to this one, just because I want to build them on top of each other, rather than...
F: Well, there was a comment, if I'm not mistaken, in the new document that was created, you know, about the thinking about the new names for downstream namespaces, and there was a comment about the fact that we should limit the names of the namespaces to avoid having, you know, overflow in the ingress names, which are derived from ingresses. So, you know, it might be related to the new naming-scheme work that is being done.
A: Okay, this is a flake; I skipped that one. Oh, yes.
A: Yeah, I put some code for that in my PR about the front proxy, about that command, and I just added "always 200 ready", which is not an answer to your issue here, but it's related.
A: That's kind of part of the sharding epic, so I would put them in TBD. Part of those ideas, mostly about ideas and direction, are addressed by the epic. So if you don't want to lose the ideas, I think that's fine: a TBD feature, which would be a filing we do, yeah. Do we have something like that? No? Okay, there's at least one more like that.
F: Yeah, and the main point is that we should have a central place to, you know, store and follow the consistency between the individual API exports and the negotiated one. Currently this logic is spread around, and it's all event-based, though it should be state-based. So that's the main one; we are just missing some corner cases, or there might be hidden bugs.
A: Yes, obviously. Can you merge those two, maybe? Yes, 156 into 158.
C: That's, I would say, the first issue I opened. Okay.