From YouTube: Community Meeting, May 3, 2022
A
Yeah, welcome everybody to the community meeting on May 3rd. We have an agenda with a couple of items; looking forward to them. I would propose we go backwards through the agenda; if anybody has another topic, please add it and we'll come back to it. I think we start with the first one, a very positive one: we tagged 0.4 today, and I think I will give the floor to Paul, if he's here.
B
Yeah, I'm here. So, awesome work. I think it's a good time to just step back and recognize some of the things that this group has done. The first thing I wanted to mention is that we had four new contributors; Stefan called them out in our public channel. I'm not going to try to pronounce everyone's name, but thank you to the new folks that are contributing to the project. It's really awesome to see.
B
So we've got a great big list of things that were merged, and I won't go through and read them all, but I just want to talk about a few. We've had new items merged that help people interact with the system a little better. We've had some feature gates land, and we're starting to use them; tenancy API validations are in there; the first draft of the location and placement API has been merged, and that's huge; and enabling validating and mutating webhooks, that's in there. I haven't been at every community call, but I saw an awesome demo for proxying per-cluster-level commands like logs and execs down to workloads. We had a really cool demo about syncing strategies and workspace resource transformations, and work we started in 0.4 that has moved over to 0.5 for some of the advanced scheduling integrations. There was also big work on the syncer during this release, removing the whole push mode; that was a huge one.
B
There were quite a few CI/CD pieces when I reviewed them all, which is awesome to see, and we still had time for process: we made some new agreements on how we're going to plan our future releases to make sure that we don't over-scope them. We also added the add-to-project integration from the GitHub beta APIs. And that's all just within the kcp repo; we've got folks in this group who are working outside the repo on kcp things as well, specifically all the code-generation pieces that are being worked on for controllers.
A
Yeah, even with this long list of things in 0.4 that Paul showed, there is major work which is not ready to merge, but not far from it. So I think this will keep us busy for at least half of the milestone: everything about the virtual workspace and the advanced scheduling integration with the Location API. A couple of people, at least myself, David and Jorge, are busy with that.
B
We did push the controllers that point back to kcp, plus tests and things. It would probably be good to finish cleaning up those items.
A
There's an issue for a missing end-to-end test: we have service accounts, kind of; it worked in a demo, but in end-to-end, at least Marvel said, it doesn't work. So this is for somebody who's interested in that. It's low-level pod-spec mutation and transformation. I think somebody has to dive into it, try to find out what is missing, and write the end-to-end test.
B
There are some discussions about storage use cases, and I can link the document there. It has really been mostly about refining what we would like to attempt, and we haven't yet split any of those out into actual issues. So this is a public document, if folks are interested in the storage area. I think what we really need to do is figure out the very minimal case: here's a workload that actually has persistent storage; does it run, and if so, what are the mechanisms?
B
We need to keep it running and possibly move it, so I can consume storage. Let's pin to a single cluster, and figure out how that integrates with the location and placement APIs that are currently being worked on.
D
We could, as you mentioned to me yesterday privately, Stefan; we haven't, you know, written any actual demo scripts, and I think that's fine. It takes a lot of time to get them together, open a PR, and get them reviewed.
D
I still think it's worth the time to do the demos. I think we can record them as YouTube or asciinema or whatever, but it's the sort of thing where they can be short, ideally: pick a feature or a sub-feature and record a one-to-five-minute video for it. That's still going to take some time to put together, but hopefully it won't feel like as much of a burden.
A
I would also like to see us keep the doc, writing down those user stories in demo-script form. Sure, I agree the value of a demo committed to GitHub is limited. Also, another topic: more people outside the core team of this group play with kcp, and we see more and more that they just take a random demo from somewhere, from a document or from GitHub, and try to replay it. Obviously it doesn't work, because we are already much further ahead in the APIs.
F
Yes, I wanted to say two things about demos. I'm not sure what the solution is, but Stefan, I completely agree about how they're consumed. A common mode is: I'm outside this group, and I'd like to be able to just run a demo, to put the system through some particular scenario and understand what's going on, and to be able to stop at any point and probe in more detail.
F
Also, for the recordings: I hate video. It's usually too fast or too slow; it never runs at the pace that I want. I like to read rather than watch, and my preferred form of recording is a text document with appropriate screen captures or text captures to show the important state.
D
Yeah, I'm happy to go that route as an alternative, or both. Certainly recording a demo is time-consuming, and so is writing a blog.
D
I'd like to make sure that we have the user stories codified in e2e tests that we maintain, just to make sure that we're not losing any coverage there, and then delete the demos from the repo. If there are certain things that we do want to keep up to date as walk-through documentation, we can think about that. But I definitely don't want the demos to live on, given that they break as soon as we make changes and we don't come back to them.
F
So yeah, I think there should be maintained things that run, as well as blog posts or videos that capture things. And if there are maintained things that run, then rather than a list of demo scripts that don't work, maybe somewhere have a list of pointers into e2e tests that people can read to understand what the system can do.
A
So there was an ask for a logo for kcp. People are starting to talk about kcp in the community, which is great, so having a logo which people can recognize would be very good, I think. Some people have discussed it already in other Slack discussions, so what I would propose, to come to some conclusion, is to have one document where we hold the discussion; we use this document here. It's just a slideshow, a couple of slides, basically one slide per concept.
A
Plus the front page where you have all of them; those are the current designs. You see there's space for more, so if you have ideas, please put them here: make a copy of the template and, very importantly, start discussing, like if you don't like a color, or if you think something is too similar to something else.
C
Yeah, I'll throw out there that this is part of a larger effort, and this is just the beginning of it: to get a web page up and all that kind of stuff, and to move some of this content to a friendlier place than GitHub for some audiences. So this is step one.
D
Or to TBD; yeah, a combination of the two. Like I said earlier, I want to go through everything that's currently in 0.5 and make a hard decision whether it belongs in 0.5 or TBD.
A
So those are the new ones. The one we just discussed, cleaning up the demo scripts, that's obvious. There's one ticket about PowerPC, and it's not only an ask for us to do something: to my understanding, the Power team, I'm not sure if this is a Red Hat PowerPC team, probably wants to invest in making this compatible.
D
So yeah, I think, putting it in perspective: this sounds great to get in, but for core functionality it's not a priority compared to everything else. So I think putting it in the TBD milestone, where it can land when it lands, is probably the right thing to do here. That's the intent.
D
Yeah, the main one that we're dealing with is that, for some reason, in our syncer test the deployment that we're trying to get synced down to the workload cluster is not getting the label that schedules it to the workload cluster, so it never gets synced and the test fails.
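The label check at the heart of this flake can be sketched roughly as follows. This is a minimal illustration only: the label key `workload.kcp.dev/cluster` is an assumption for the sketch, not necessarily the key kcp actually uses, and the real syncer works on full Kubernetes objects rather than bare label maps.

```go
package main

import "fmt"

// Assumed label key for illustration; the scheduler marks a resource for a
// workload cluster by setting a label like this, and the syncer only picks
// up resources carrying it.
const schedulingLabelKey = "workload.kcp.dev/cluster"

// scheduledTo reports whether the object's labels assign it to the given
// workload cluster. The flake described above amounts to this check never
// becoming true for the test deployment, so it is never synced.
func scheduledTo(labels map[string]string, cluster string) bool {
	return labels[schedulingLabelKey] == cluster
}

func main() {
	labeled := map[string]string{schedulingLabelKey: "us-east-1"}
	fmt.Println(scheduledTo(labeled, "us-east-1")) // true: will be synced
	fmt.Println(scheduledTo(nil, "us-east-1"))     // false: never synced
}
```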
D
While I'm talking right now, I have an endless loop running to try to reproduce it, and I can't. Stefan and I and others have been adding documentation and extra debugging to our test fixtures and test code to try to track down what's going on.
D
We have also enabled it so that admins for the repo, folks like Stefan and me, can merge pull requests if the shared-server test is failing on this flake, because we don't want to halt merge progress. So we are going to be using some manual overrides until we can nail down the source of this flake and get rid of it.
A
Yeah, so speak up when your PR is broken by this. You'll see what it looks like: the test is end-to-end syncer, and then something about a deployment. That's it; speak up, and one of us can override the test and merge the PRs, that's fine. And if anybody is interested in digging deeper here: our suspicion is that there are several things going on, because sometimes the deployment is even synced, but it's not running.
A
So you will see there are more things which might be broken, and along the way we clean up stuff, which is also good. So if you have more time, or interest, to look into that as well, please join us. I put an "urgent" label here; I think that's correct, right? Yeah.
A
So
formally,
I'm
not
sure
what
we
put
here.
It
doesn't
really
matter
something
we
cannot
wait
with.
So
it's
it's
painful,
maybe
also
what's
important
here.
We
got
kind
support
in
end
to
end,
so
we
are
running
a
kind
cluster
in
parallel,
so
this
is
really
deploying
real
pot.
This
busy
box
inside
so
in
the
past,
everything
was
fake.
So
we
had
fake
compute
cluster
and
now
it's
real
and
maybe
it's
connected-
why
we
see
those
issues.
D
I only saw that once. If somebody's got spare time and wants to look into it: I'm fairly familiar with the tests, but I would love to get more folks familiar with the tests as well.
A
So before, you could delete the workspace object, but every object inside was staying in the cluster, and controllers went crazy about that, because they saw a namespace but the cluster workspace below it was gone. So we got deletion, which is cool, but we also have a flake there in the controller.
D
And then you manually apply the syncer to your workload cluster, so that'll give you a namespace, a service account, a secret, a deployment, and some other stuff. So I think just thinking through the UX here would be useful, and if there's clarification that we need, we can ask about specifics.
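For context, the manifest being applied would roughly contain objects of this shape. All names and the image below are illustrative placeholders, not kcp's actual generated syncer manifest:

```yaml
# Illustrative sketch of a syncer manifest: a namespace, a service account,
# and a deployment running the syncer on the workload cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: kcp-syncer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: syncer
  namespace: kcp-syncer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syncer
  namespace: kcp-syncer
spec:
  replicas: 1
  selector:
    matchLabels: {app: syncer}
  template:
    metadata:
      labels: {app: syncer}
    spec:
      serviceAccountName: syncer
      containers:
      - name: syncer
        image: example.invalid/kcp-syncer:latest  # placeholder image
```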
A
Is
it
the
uid
of
the
workload
cluster
which
I
don't
think
it
should
be,
and
this
is
connected
here?
If
you
delete
the
object,
then
its
identity
would
be
gone
like
then
the
thinker
must
go
as
well,
and
the
question
was
whether
maybe
identity
is
something
on
top
similar
to
api
exports,
so
you
can
delete
the
vertical
cluster,
but
the
identity
is
stored
somewhere
else
and
you
can
recover
from
the
situation.
H
I'm sorry, I'm a beginner here. I think a syncer is responsible for a number of resources, right? It syncs a number of resources, and so you could have two syncers responsible for two different resources. So the problem is to avoid, as I see it, having two syncers sync the same resources, right?
E
Sorry, okay, I was about to say that in the near future, with the syncer virtual workspace full picture...
E
If I'm not mistaken, the syncer would not even have to know the resources it has to sync, because on the endpoint that is dedicated to each syncer, you would only get the APIs that you have to sync.
A
If somebody speaks up and wants to take such an epic, of course we can move stuff into a milestone, but we need a person who is basically excited about the topic and wants to drive it.
D
I would rather try to find a way to get it up into the workload cluster status or something, but there needs to be some easily diagnosable reason why syncing is not happening in this situation.
A
Let's
maybe
think
afterwards,
we
have
a
couple
of
other
similar
topics,
so
you
should
see
which
are
most
important,
but
everything
around
okay.
So
this
one
is
also
such
issues,
but
it
it
includes
something
about
design.
I
think
api
design.
So
it's
not
immediate
how
to
implement
it.
That's
why
we
need
a
discussion
first,
I
think
where
we
want
to
go.
D
This is different. Say you're syncing deployments, and there's a deployment that is synced from kcp to the physical cluster, but there are also other deployments in the physical cluster that were there for whatever reason. The syncer is going to try to update their status in kcp, and it's going to fail because it can't find them. So we need to only sync status back on resources that we know were synced from kcp in the first place; the famous config maps.
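The filtering being proposed can be sketched as a simple ownership check, assuming, as described in the next turn, that resources synced downstream keep a cluster label. The label key here is a hypothetical example, not kcp's actual key:

```go
package main

import "fmt"

// Assumed label key carried by resources the syncer created downstream.
const clusterLabelKey = "workload.kcp.dev/cluster"

// ownedByKCP reports whether a downstream resource was originally synced
// from kcp, and therefore should have its status synced back upstream.
// Pre-existing resources on the physical cluster lack the label and are
// skipped, which is exactly the filtering discussed above.
func ownedByKCP(downstreamLabels map[string]string, cluster string) bool {
	return downstreamLabels[clusterLabelKey] == cluster
}

func main() {
	synced := map[string]string{clusterLabelKey: "cluster-a"}
	preexisting := map[string]string{"app": "legacy"}
	fmt.Println(ownedByKCP(synced, "cluster-a"))      // true: sync status back
	fmt.Println(ownedByKCP(preexisting, "cluster-a")) // false: leave it alone
}
```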
E
Yeah, that's quite strange; it might even be fixed by the upcoming commits about advanced scheduling. Normally, when we sync something downstream, it would keep the cluster label, and so when we watch things downstream, we would watch with that same label, and so only sync back upstream the things that initially came from upstream through this label. So yeah.
D
Yeah, so this one: we stopped updating the status field in the workload cluster for synced API resources when we got rid of push mode. It kind of goes back to where we are going to configure things, and whether we want to get rid of this for now and bring it back later, or actually fix it now.
D
Yeah, I mean, that's just the start of things; I need to check one or two of those boxes off. This is going to...
D
So we probably need to get to a point where either a whole bunch of people prep for rebases once this lands, or we have people pause working on things, we get this in, and then folks start working again. I don't want to have to rebase again and again as commits are added to kcp to get this updated, and I don't want to make other people have to rebase a whole bunch of times either. So I think we just need to figure out the timing here.
A
So
background
is
that
we
cannot
import
any
cube
library
right.
So
it's
a
pretty
minimal
package,
that's
what
we
found
out.
Otherwise
you
get
cycle
dependencies.
So
it's
just
luxury
cluster,
just
the
struct,
which
we
have
already
that's.
Why
we
want
to
move
it
under
kcp,
dev
logic,
cluster
dot
name.
Somebody
has
other
proposals.
G
Yeah, I was just wondering: if this doesn't land in 0.5, would we try to find another place for the current client-library work to go, since that's sort of paused because we can't add the Kubernetes imports to that repo?
D
Yeah, so 1.24 is coming out, in theory, today, May 3rd. I think if we wait to do the rebase in June, after we get 0.5 out the door, we're not missing out on anything new in 1.24 that I'm aware of by waiting a few weeks to do it.
A
Yeah, there's just this detail everybody has noticed: ClusterName got renamed into something super ugly, because everybody remembers it said "that's deprecated, don't use this ClusterName", also a super-ugly identifier. And it also gets wiped, I think, in the storage layer, so we cannot use it anymore, which means we have to move to annotations.
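Moving the identity to an annotation can be sketched like this. The annotation key and helper names are assumptions for illustration, not kcp's actual choices:

```go
package main

import "fmt"

// Assumed annotation key carrying the logical-cluster identity, since the
// deprecated metadata ClusterName field is wiped by the storage layer.
const clusterAnnotationKey = "kcp.dev/cluster"

// ObjectMeta stands in for the metadata of an object; the real code would
// use the Kubernetes ObjectMeta type.
type ObjectMeta struct {
	Annotations map[string]string
}

// setCluster records the logical-cluster name as an annotation.
func setCluster(m *ObjectMeta, name string) {
	if m.Annotations == nil {
		m.Annotations = map[string]string{}
	}
	m.Annotations[clusterAnnotationKey] = name
}

// getCluster reads the logical-cluster name back, or "" if unset.
func getCluster(m *ObjectMeta) string {
	return m.Annotations[clusterAnnotationKey]
}

func main() {
	var m ObjectMeta
	setCluster(&m, "root:org:team")
	fmt.Println(getCluster(&m)) // root:org:team
}
```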
D
Yeah, I think now is a good time to try to do that too. I mean, it's not glorious work by any means, but it does future-proof us.
D
In our kube fork, there are open pull requests: Fabian has the one for the customizable key func that we're going to need for the informers, and I think we need to review the other ones. There's something that's been open for over a year; everything else is a few months old, so we should spend some time and go back and look at them.
D
I'm also happy to do it, or I'm happy to hop on Google Meet with somebody and walk through it, if there's anybody who's interested in seeing how to update our fork.
D
It would be a good learning experience too. Sounds good. So, do we meet the bar for 0.5? I think it needs to happen.
D
Okay, yeah. If folks want to take a look, I'm just going to paste a link here to name.go. Basically, this is what we're proposing tagging.
D
Yeah, the GitHub org invitation is still pending.
I
Yes, so I noticed: if you create two workspaces, so you are in one workspace and you would like to jump to another workspace, you have to pass by the parent and then come back down to the lower level.
E
As far as I remember, I also tried it, just thinking that it would work; I don't remember if it's with the slash or with the colon, but you get into a state where it's quite broken. In fact, you have to really come back to the root to be able to find your stuff again. So it's not only that it doesn't work; if you try to do that, you know, please open an issue, yeah.