From YouTube: Community Meeting, July 26, 2022
A
Then we have the usual issue, hygiene; we do that at the end. The dbs has landed, and everybody noticed, or hopefully didn't notice. I think it was pretty slick and nothing big happened. Great work from Steve, so big shout-out. There are not so many things changing. So please change to Go 1.18.
A
We have generics; we still use cluster name. This will change, I think within this week or next week or something, I hope. Right now it's pretty ugly, and somebody said that about the cluster name field, so you will notice. If your code breaks, please update your code to use our logical cluster library. Don't touch this field manually; if you do, that's a big sign to update to this library.
D
Yeah, so, as you've probably heard from me on your Slack channel quite a bit recently, I've been looking into workload migration via advanced scheduling, and it's able to work with the advanced scheduling feature that's there. In my demo, a workload can migrate with no downtime between two clusters, which is great.
D
It became kind of apparent that, although we can make sure that the pod moves in a graceful way, we can't ensure that the service would stay up unless we can somehow get a sort of exhaustive list of all resources that we need to watch and migrate. So we've discussed it a little, and the concept that came to us is that the placement object seems to define the namespace as the unit of currency for where work is placed in the sync targets.
D
From kcp. And as that's the case, if we could have the soft finalizers implemented (I'm calling them soft finalizers; sorry, that's just the vernacular we've sort of generated for them, and I don't know what the right term is, but the annotation-based finalizers anyway), if they could be moved to the namespace level, then every resource in a namespace wouldn't be removed from the losing sync target.
E
Well, we already talked about this some time ago, but it didn't feel very... no. I mean, soft finalizers behave more or less like a finalizer, so you set that per object. I think it kind of makes sense, because for external coordination controllers it would be a bit of a madness having to look at each one of the resources inside the namespace. So perhaps we should take a look at that and try to implement something similar.
F
Yeah, I tend to see that as well as something possible as a first step, or at least a minimal unblocker, for the users, for the team. But it seems to me that the main underlying question or requirement is possibly dependencies between objects. How do we manage the fact that, you know, some resources rely on some other resources, and that we have to create or delete them with ordering? So maybe there is a wider question under this, I think, but obviously I'm not against it.
F
You know, managing that: as long as all the related objects are in a single namespace, having the ability to define this soft finalizer on the namespace might help, I think. At least on the virtual workspace side, it would be quite easy to just read this annotation on the namespace of the object instead of reading it directly on the resource itself. That would not be much of a problem.
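To make the idea concrete, here is a minimal Go sketch of the namespace-first lookup being discussed. The annotation key, the comma-separated value format, and the function names are illustrative assumptions, not kcp's actual convention:

```go
package main

import (
	"fmt"
	"strings"
)

// softFinalizerAnnotation is a hypothetical annotation key; the real key
// used by kcp may differ.
const softFinalizerAnnotation = "workload.kcp.dev/soft-finalizers"

// hasSoftFinalizer reports whether the given annotations (taken either
// from the resource itself or from its namespace) still list a soft
// finalizer for the given sync target.
func hasSoftFinalizer(annotations map[string]string, syncTarget string) bool {
	raw, ok := annotations[softFinalizerAnnotation]
	if !ok {
		return false
	}
	for _, f := range strings.Split(raw, ",") {
		if strings.TrimSpace(f) == syncTarget {
			return true
		}
	}
	return false
}

// blocksRemoval checks the namespace first and falls back to the object,
// so a single namespace-level soft finalizer covers every resource inside
// that namespace.
func blocksRemoval(nsAnnotations, objAnnotations map[string]string, syncTarget string) bool {
	return hasSoftFinalizer(nsAnnotations, syncTarget) || hasSoftFinalizer(objAnnotations, syncTarget)
}

func main() {
	ns := map[string]string{softFinalizerAnnotation: "us-east,us-west"}
	obj := map[string]string{}
	fmt.Println(blocksRemoval(ns, obj, "us-east")) // namespace blocks removal: true
	fmt.Println(blocksRemoval(ns, obj, "eu"))      // no finalizer for "eu": false
}
```

The point of the namespace-first lookup is exactly the one made above: a coordination controller or the virtual workspace reads one annotation per namespace instead of one per resource.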
A
So owner references don't help here, do they?
D
Really? I... sorry, I did mention that in the alternatives we considered: some sort of education of the user to rig up their owner references, so that we can still just look at the deployment and the service and the ingress, and Kubernetes will know not to clear out all the other stuff until those things have been deleted.
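For readers unfamiliar with this alternative, the cascade semantics it relies on can be sketched roughly as follows; the types are simplified stand-ins for Kubernetes `metadata.ownerReferences`, not real client-go types:

```go
package main

import "fmt"

// Object is a toy model of a Kubernetes object that may name another
// object (by UID) as its owner.
type Object struct {
	UID      string
	OwnerUID string // empty means no owner reference
}

// orphaned returns the UIDs of objects whose owner is no longer in the
// live set, i.e. the objects garbage collection would clean up next.
// Everything with a still-live owner is left alone.
func orphaned(objs []Object, live map[string]bool) []string {
	var out []string
	for _, o := range objs {
		if o.OwnerUID != "" && !live[o.OwnerUID] {
			out = append(out, o.UID)
		}
	}
	return out
}

func main() {
	objs := []Object{
		{UID: "deploy"},
		{UID: "cm", OwnerUID: "deploy"},
	}
	fmt.Println(orphaned(objs, map[string]bool{"deploy": true})) // owner alive: nothing to collect
	fmt.Println(orphaned(objs, map[string]bool{}))               // owner gone: "cm" is collected
}
```

With owner references rigged up this way, the secondary objects survive until their owners are deleted, which is why one could watch only the deployment, service, and ingress.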
F
It seems to me that the long-term solution is that when, you know, bringing things to the sync target layer, I mean through the syncing, we completely remove all ownership, the owner concept and values, and so all the objects that we sync are completely standalone. And obviously we might want, in the future, to have some sort of information that keeps the links between the things that have been synced on the sync target, and remember that they were linked exactly the same way, or at least a similar way, as they were linked on the kcp layer.
C
Yes, now I have something. So, Phil, if it'd be possible, could you write up a very specific, concrete example?
C
I know you mentioned, like, a secret or a custom resource, but if you could put together some pictures or YAML or something for a deployment and whatever it references, and what breaks specifically when there's movement, I think that would help me at least, because I guess for some of this I'm thinking it should just work without issues. So a concrete example would be helpful, if you could put something together.
C
I mean, I would love one where we can say: yes, we can go solve this problem. And if that is a big enough problem, one that covers some large percentage of use cases, then, you know, maybe we've done some help. We might not have completely solved the problem, but anything's better than nothing, and trying to be exhaustive isn't necessarily something we're going to be able to do, you know, in one step.
D
An open question that we're discussing at the moment: our fairly blunt sort of approach to it right now is "has the DNS propagated", which is kind of where we're going with it. The more long-term approach (well, I mean, we might get sort of lost in the weeds of this) is that we are looking at a more complex way of testing the application to see if it's healthy or not before removing it from the losing cluster, but that's still sort of up in the air for us right now.
A
Also,
just
one
note
everything
we
are
talking
about
is
the
case
right,
so
it's
acceptable
that
the
user
has
to
specify
things
just
as
a
general
rule
of
thumb.
It
doesn't
have
to
be
magic,
so
we
cannot
guess
the
right
thing,
because
it's
just
not
obvious,
then
specifying
something
whatever
this
means
can
be
some
I
mean
the
owner
referencing
or
some
some
similar
concept
that
we
talked
about,
but
this
would
be
exactly
acceptable
purpose.
D
Yeah, at the moment we do. We are writing up the opportunity for users to tell us how to do a DNS health check, so that we can take out targets that are unhealthy. So the idea we had was to reuse those health checks to query a cluster to see if it was ready to point DNS at it yet. Yeah, but certainly some sort of health check, I think, has to be involved here. Yeah.
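As a sketch of what such a readiness gate might look like, here is a small Go example with the resolver injected so the logic stays testable; the function name and its semantics are assumptions for illustration, not the actual implementation being written up:

```go
package main

import "fmt"

// resolveFunc abstracts DNS resolution (in real code, net.LookupHost).
type resolveFunc func(host string) ([]string, error)

// dnsPropagated reports whether the resolver already returns the expected
// IP for the host, i.e. whether it is safe to treat the cluster as a
// valid DNS target yet.
func dnsPropagated(resolve resolveFunc, host, wantIP string) bool {
	ips, err := resolve(host)
	if err != nil {
		return false
	}
	for _, ip := range ips {
		if ip == wantIP {
			return true
		}
	}
	return false
}

func main() {
	// A fake resolver stands in for real DNS in this sketch.
	fake := func(host string) ([]string, error) { return []string{"10.0.0.7"}, nil }
	fmt.Println(dnsPropagated(fake, "app.example.com", "10.0.0.7")) // true
	fmt.Println(dnsPropagated(fake, "app.example.com", "10.0.0.8")) // false
}
```

The same predicate could be reused both as the health check that removes unhealthy targets and as the gate before pointing DNS at a new cluster, which is the reuse described above.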
A
People are on storage, so... that guy, what's his name? He's not here, I think. He's here? Oh.
G
So you mean... you meant me.
G
So that, you know, we have some feature going forward, versus a design that we, you know, create while we work on that.
A
All right, so I think that's all. Looking to Paul again: is there anything we have to do for planning?
B
I would love it if any of the folks that have 0.8 design items in the work packages, who have discussed them already, could talk to the group about them, if they're prepared. So it looks like we've got links for API evolution in there, as well as kcp network downstream namespace translation.
F
Yeah, the inverse syncing, you mean? Yes, I didn't start thinking about it.
F
I'm sorry; mainly this would probably, I mean, be possible for me next week, following the brainstorming and thinking about the syncer transformations and also the current state of the syncer, because, you know, a number of changes here have to occur, and I think this would mainly fit just after that.
A
So yeah, we do that. I'm not sure we can summarize it at the moment, but there are discussions about what the problem actually is, around... I mean, namespaces is one thing, like how to implement DNS.
A
The
other
thing
where
I
have
seen
discussions
is
around
hot
identity
and
whether
we
need
ips,
which
are
in
a
common
scn
of
some
kind.
So
there
are
discussions
going
on.
I
think
it's
not
really
here
for
reporting
and
api
evolution.
We
had
a
meeting.
I
think
any
we
need
another
one,
maybe
to
get
more
concrete.
A
Yeah, we have to check. So Steve is out, I think, next week also.
C
Yeah, he's... I don't know if he's available today, but I know he's out starting tonight for some time.
A
All
right
and
the
last
one,
the
first
here
in
the
list
is
mike
here.
I
don't
think.
No,
I
don't
see
him
superior
janice
would
be
good
to
discuss
that
anyway.
So
he's
not
here
paul,
that's
all
we
have
to
talk
about
at
the
moment.
C
Sure. So you can create APIs and export them in kcp, and they're very similar to CRDs, but we don't support conversion webhooks like you have with CRDs. So if you create a new API for widgets and it's v1alpha1, and you want to go to v1alpha2, or v1, or whatever, we don't currently have any mechanism that really makes that possible.
C
Mainly because it can reduce the burden on implementation, so that you don't have to have webhooks up and running. Also, if we can do everything server-side, it's a little bit easier. And, I mean, mainly right now we do support webhooks, but we don't support service-based webhooks, or at least not the ones that you typically see, where you've got a service, and cert-manager generates certs, and everything you get with controller-runtime, for example.
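As a sketch of what "everything server side" could mean for the widgets example above, here is a hand-written field mapping between two hypothetical versions; a real mechanism would presumably be driven declaratively by the API resource schemas rather than compiled-in Go code like this:

```go
package main

import "fmt"

// Hypothetical widget schemas; the types and field names are invented
// for illustration, not kcp's actual APIs.
type WidgetV1Alpha1 struct {
	Name string
	Size int
}

type WidgetV1Alpha2 struct {
	Name     string
	Replicas int // renamed from Size between versions
}

// convert performs a purely server-side field mapping between versions,
// the kind of conversion that avoids calling out to a user-hosted
// conversion webhook.
func convert(in WidgetV1Alpha1) WidgetV1Alpha2 {
	return WidgetV1Alpha2{Name: in.Name, Replicas: in.Size}
}

func main() {
	out := convert(WidgetV1Alpha1{Name: "w1", Size: 3})
	fmt.Printf("%+v\n", out)
}
```

Because the mapping runs inside the server, no per-tenant webhook deployment is needed, which matters for the distributed, multi-shard setting discussed next.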
A
And we are building a distributed system here, so it's not like kube, which has a uniform network environment. This is distributed, so there might be shards in US East, US West, in Europe, and conversion webhooks are crucial: they have to work. If they don't work, things break; garbage collection or something like that just breaks. And in this distributed world, having webhooks and forcing everybody to deploy them in a highly available way...
C
And also, the majority of the APIs that kcp has added are done using APIExports and APIBindings right now, and APIResourceSchemas. So there's a couple that are pure CRDs that have to be, but any place where we are able to use the schema plus export plus binding combination, we do. So it would definitely be dogfooding.
G
The initial scope is that RWX really doesn't require exclusivity between clusters, and it makes this flow really focus on the APIs that we need to expose, on one hand, and on the flows of sharing information between the workload clusters and kcp and the control plane, on the other.
G
So this is the overview, and the objectives here are around setting up storage. The first one is setting up the storage in the workload clusters, with NFS as the example storage for all of these. And we're assuming here that it's a single location, so that there is movement between clusters in that location, and that it's the same network storage that all these clusters are connected to.
G
So it's not hyperconverged or anything that creates a disaster point; it's not like the data is unavailable or anything like that. So storage is external, you know, in the one-location kind of case. Then the deployment of the application will invoke dynamic provisioning: the application starts without having storage provisioned, and then through kcp gets storage provisioned.
G
Then we want to show the placement changes for that application, so how that application can move between those clusters: if a cluster goes down, or if we want to change it on demand. I'm not sure if on demand is really a use case, but I guess it is; cluster failure is really the simplest use case to understand here. That could be the demo. And at the end, we want to also support de-provisioning the storage as well.
G
So how do we get rid of storage when kcp decides to, not when one of the clusters is... you know, not when we actually just take the workload or the application out of one of the clusters. So these are the objectives.
G
Pretty much going through this is what I just said, but with a little bit more detail, so I don't think I should really repeat it. There's one detail here, in point six: when we move things around, we actually do it a little bit differently. We actually need to be able to change the mode, so it doesn't go back to dynamic provisioning again.
G
So we want all the information that tells us that this PVC is already bound, and we already have the information of the PV, so that we can now sync those together as a bound couple. In storage terms it becomes a static provisioning mode, where these things get bound together by the controller that puts them in the cluster.
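A rough sketch of the mode change described here, using simplified stand-ins for the PVC and PV fields involved (not the real Kubernetes types):

```go
package main

import "fmt"

// Minimal stand-ins for the PVC/PV fields that matter for pre-binding;
// field names are simplified from the real Kubernetes specs.
type PVC struct {
	Name             string
	StorageClassName string
	VolumeName       string
}

type PV struct {
	Name     string
	ClaimRef string
}

// toStatic rewrites an already-bound PVC/PV pair for syncing to another
// cluster in static-provisioning mode: the PVC keeps its binding to the
// existing PV, and the storage class is cleared so the target cluster
// does not trigger dynamic provisioning again.
func toStatic(pvc PVC, pv PV) (PVC, PV) {
	pvc.StorageClassName = "" // no provisioner should pick this claim up
	pvc.VolumeName = pv.Name  // pre-bind the claim to the existing volume
	pv.ClaimRef = pvc.Name    // and bind the volume back to the claim
	return pvc, pv
}

func main() {
	pvc, pv := toStatic(PVC{Name: "data", StorageClassName: "nfs"}, PV{Name: "pv-42"})
	fmt.Println(pvc.VolumeName, pvc.StorageClassName == "", pv.ClaimRef)
}
```

Pre-setting the volume name and clearing the storage class is the usual static-provisioning signal in Kubernetes, so the receiving cluster binds the existing volume instead of provisioning a new one.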
G
So, transformations: I'm pretty sure we will require some; I'm not sure if it's going to be complex. It might be scoped to this one, like the PVC will just have internal transformations, but I guess, as we go along the rest of the, you know, things that are out of scope here, we might have more to handle. Inverse syncing? You mean the status syncing, or no?
G
Yes, so that is the thing that we have to support here. So if you go a little bit further: I didn't really know what to put in action items, so I kind of kept it by the template, but in the stories I tried to go and describe that we need the PV information in kcp, and...
F
Yeah, yeah, yeah, and probably include the new guy in the session next week about inverted syncing, yeah.
G
I remember we didn't conclude any specific direction, but yeah.
A
Yeah-
let's
talk
about
that
next
week,
so
sure
I
would
move
to
the
next
one
for
the
moment.
Thank
you.
It
was
really
good
to
see
that
there's
progress
and
we
get
storage.
Finally,
at
least
in
first
steps
very
cool.
Next
one
clarify
oh
yeah,
this
one
steve's,
not
here
right
so.
F
Yeah, I looked a bit into it. Obviously we didn't go into those details, so.
A
Yeah
he
did
the
math,
which
we
didn't
write,
or
just
very
briefly
so
bucket
number
and
right,
yeah
and
so
default
is
two
two.
F
Yeah, yeah, that's exactly what you're saying. I mean, we just limited ourselves to calculating the maximum number and minimum number for both the size and the length, but not the interaction between them, and obviously, depending on the cases, you can end up with a number of users in a bucket which is too high.
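To illustrate why the interaction between the parameters matters, here is a toy capacity calculation; the uniform-distribution assumption and the parameter names are illustrative, not the actual home-workspace bucketing scheme:

```go
package main

import (
	"fmt"
	"math"
)

// usersPerBucket estimates how many home workspaces land in each leaf
// bucket, given an alphabet size, a bucket-name length, and a bucket
// depth, assuming names hash uniformly. All parameters are illustrative.
func usersPerBucket(totalUsers, alphabet, nameLen, depth int) float64 {
	buckets := math.Pow(float64(alphabet), float64(nameLen*depth))
	return float64(totalUsers) / buckets
}

func main() {
	// With, say, depth 2 and bucket-name length 2 over a-z, there are
	// 26^4 leaf buckets; a million users spread thinly across them.
	fmt.Printf("%.2f users/bucket\n", usersPerBucket(1_000_000, 26, 2, 2))
}
```

Checking each parameter's range in isolation cannot catch the bad combinations: a small alphabet with a short name length at low depth blows up the per-bucket count even though every individual value is "valid", which is the inconsistency being discussed.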
F
It's mainly a question of deciding if we manage this mainly by documentation, or by just validating the right combinations in the command line, for example. I mean, we have to decide how we'll manage that, so as to avoid users, you know, setting inconsistent values.
A
So,
let's,
let's
talk
it's
like
also
offline,
sure,
certainly
not
not
blocking
anything,
but
yes
he's
right.
So
many
of
those
values
don't
make
make
sense.
Okay,
so
to
be
the
next
one.
C
It basically makes it hard to go up and down between... like, when you're in a home workspace and you go up and then try to get back into it, it's a bit problematic. I also think something might be broken: you can get cluster workspaces and see them, but if you try to get workspaces, you don't always, even in the same space...
F
Yeah. Oh, and still, in any case, obviously it seems to me that there is the, you know, more general question of: do we want... I mean, when you do a kubectl ws...
F
You know, tilde: you want your home workspace, and you end up in the workspace with the full path. But then, if you want to go one level up, you don't have the right, because no user has the right to see anything in the bucket. So yeah.
A
Oh yeah, that came out of a discussion, and it's like... so we talked about API evolution earlier today, and basically we postponed a number of changes. So, like, we have path and workspace mixed: sometimes a reference to a workspace uses paths, sometimes workspace names, and this is of course not uniform. We want to change it, but we postponed it until we move to v1alpha2 or v1beta1, and we want to use API evolution, CEL conversion probably, to do that, so we don't break users. That was our decision.