From YouTube: SIG Cluster Lifecycle - Office Hours - 20230613
A
Hello everyone, today is Tuesday, the 13th of June 2023. This is a SIG Cluster Lifecycle office hour. I'm going to share the agenda doc and give the word to Lubomir, who has the first topic on the agenda.
B
Hey, I just wanted to give a heads up that we started discussing the next API version of kubeadm, which is v1beta4. I can post the link in the Zoom chat; I cannot edit the doc right now.
B
This is basically a tracking issue for the API changes which are proposed. Some of these we have been dragging along for multiple releases, either without consensus, because we are not agreeing on them, or maybe because somebody is just not taking ownership to complete the feature. But going through this list, it could be a very small API release, with the potential to drop maybe 50% of these punch-list items. One interesting idea was to potentially work on this API across multiple releases.
B
We can work on it over multiple releases and eventually it will be added to the scheme, which means it becomes part of the kubeadm API, or `--config` on the CLI. If you have any comments or demands for features that are missing in higher-level components, or just anything in general, please comment. This particular candidate list of features is owned by Dave Chen, who is a fairly active kubeadm contributor.
B
So basically everything is up for discussion at this point. I personally have some favorites here, and we can get to them and discuss, but again, if you have any comments on this list, or want something new added to it, please add your comments. And also, if you have any questions, please go ahead.
A
Yeah, from my side, first of all, thank you for raising attention to this one. I will try to go through the list, provide feedback, and also advertise it.
A
In the office hours, and wherever else this is happening, we will try to collect feedback and give it to you as soon as possible. What would be interesting for me, looking at this list, is if we add a note about which changes are breaking and which changes are just additions, so we can start focusing on the breaking changes, because these are the ones that will impact the users.
A
Additions usually are smooth; they are just there, and people can start using them from when they are available. But I will comment on this issue, asking to update the list with breaking or not breaking.
A
Cecile, Justin: do you want to comment?
C
Yeah, I guess just one question from my side: has there been any discussion in the past to get kubeadm out of beta, or is that not on the table?
B
I think it's just the confidence, and the main blocker is the so-called, almost infamous, node-specific component config. It's a problem where we don't know how we can configure, for example, a control plane machine in a specific, unique way. Currently we have a ClusterConfiguration, which basically applies to all the control plane machines and treats them as replicas.
B
In the past, users, for example people who have some sort of hybrid, self-hosted, custom hardware setup, have required a slightly more unique configuration for each control plane machine.
B
It's certainly not the use case of "I click a button in a cloud provider and some high-level tooling deploys a cluster for me," and it has been a bit of a niche use case, but it certainly applies not only to the control plane but also to the kubelet: how do we ensure workers and control plane machines are managed in an instance-specific way? Currently we just write flags or patches, as files on disk, and it's really not an API; we just write stuff.
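The flags-and-patches workaround described here is kubeadm's `--patches` mechanism: a directory of patch files that kubeadm merges into the static Pod manifests it generates on that machine. A minimal sketch, with an illustrative resource tweak; the file name follows kubeadm's `target[suffix][+patchtype].extension` convention:

```yaml
# kube-apiserver+strategic.yaml, placed in the directory passed via
# --patches (or the patches field of the config file). It is applied
# only on the node where kubeadm runs, which is how per-machine
# differences are expressed today, outside the versioned API.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: "500m"
```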
B
There have been ideas in the past, especially with Fabrizio; we have been discussing this for a long time: how do we turn kubeadm into something more declarative? How do we store instance-specific configuration, perhaps in ConfigMaps? There have been ideas, but until this happens, and who knows when it's going to happen, I think it might be safer for us to continue with beta.
B
By the way, I see the kubelet folks have decided that a v1 is on the way, despite all the noise inside the current implementation of the kubelet configuration, which is a mixture of local flag fields and global fields. It's a bit of a mess, but they said maybe it's a good idea to just release v1, and maybe we could do the same in kubeadm. Actually, I personally don't mind, and maybe we can say, okay, in the future a v2 is going to be quite breaking, or a complete redesign of what we have.
B
So again, we lack the confidence, and perhaps we should just vote: the available maintainers should just gather and decide what to do.
C
Thanks, yeah, that makes sense. I think the main reason I was asking is that I think I saw somewhere that Kubernetes, the project as a whole, was trying to move away from perma-beta APIs, because that's, I think, an issue across many APIs: once an API reaches beta, it's hard to gain the confidence to go to v1, but then folks are still using it.
C
Anyway, tons of users rely on kubeadm in production despite the beta status. And I thought I saw somewhere that Kubernetes was going to enforce a maximum number of releases that an API could be in beta, after which it either needs to be deprecated or it needs to be graduated.
C
So I was wondering if that would affect kubeadm in any way, given this discussion.
B
You know, with respect to new features, we can certainly follow some of the saner rules, and I'm definitely plus one to not, you know, indefinitely staying in beta. It's unfortunate that this hesitation has propagated into the whole project; if we look at some of the other APIs, for example kube-controller-manager and kube-proxy, those are still stuck in alpha, and they are almost completely, massively used everywhere, in production software all over the place.
B
So yes, I agree with the whole notion. Personally, a trigger for me would be if the kubelet just decides to move to a v1 API; I think we should probably just move kubeadm as well, and then think about this redesign, if we have to, in the future.
D
Yeah, I certainly don't mind us moving kubeadm forwards. I think the beta thing applies primarily to built-in types, and that's because their versioning story is so much weaker than CRDs, so they're very restricted in, well, I mean, they've enforced their own rules where they want to get rid of the older versions.
B
We just gather ideas from core APIs and try to follow their rules. We do something similar with feature gates, which, by the way, I think Daniel Smith wants to see a proposal, in a KEP, to completely remove as a concept. So kubeadm mimics these concepts in terms of what we do with our component config API, the feature gate life cycle, and the deprecation policy in general. Yeah, I'm definitely plus one to following; even if it's for core APIs or component config, we may just apply the same policy, at least in kubeadm.
B
Nowadays we don't have the Component Standard working group, which used to delegate responsibility to all components in terms of component config, so we are kind of freestyling: every component is making its own decision. I'm indifferent; for kubeadm we could just follow such a policy to get out of beta as soon as possible.
A
Let me just say one concern, which stands for things like Cluster API: what happens is that, while you upgrade the control plane, there is a period where you have a breaking change, like the change of the API version in kubeadm. For some period you cannot join; basically, from the moment the control plane is upgraded, you cannot join nodes with the previous version of kubeadm anymore.
B
So yeah, that's a separate problem, essentially. I think I said it at the Cluster API meeting: it requires a KEP, and we have to establish what we have to do. Essentially, we can continue releasing APIs, including a v1, but how we handle the skew is a completely different beast, and it's complicated.
B
Actually, there were a couple of people at the last Cluster Lifecycle meeting, I think, but they did not, let me say, take immediate ownership of this, as in "oh yeah, okay, I'm going to handle this." It's complicated, and nobody has started a KEP. Maybe after we release this API, maybe I or somebody else can simply start writing a KEP on how we handle this skew problem.
A
That's fair. I was thinking along the lines that you were throwing out there, the other idea, to split the implementation across multiple releases. So maybe in the first release we just make join capable of reading the new format, and in the next release we make kubeadm actually migrate; so when we migrate, we have a minus-one version that can read it. But yeah, that's fair, we have to write this down and discuss.
B
Oh yeah, I see what you mean, but basically this was more about how we put as many features as possible in, across multiple releases. What was proposed is that we keep the API locked, hidden, because it's not part of the scheme; it's not convertible. We just add features to it, but we don't make it the default API.
B
I'm not sure how well this is going to work, but I think I like the idea, because it kind of removes the pressure from us to try to release an API in one release. It's doable nowadays, with a four-month release cycle it's doable, but I had a question for you. So imagine that we do a breaking change, such as the extraArgs duplicates problem.
B
If we completely change the structure: right now it's a key-value map, which is limited by Go; basically, we cannot have duplicate keys, so we have to change the structure. It could become a list of structs, something like that, which is a breaking change, essentially, for users. How can Cluster API migrate users? You know, kubeadm has `kubeadm config migrate`, which is a CLI, but how can Cluster API migrate just the API object in the cluster?
B
I see, great. Well, sometimes I wish we had centralized this, the external conversion idea, but it's limited now by the whole shallow-copy thing, and mimicking conversion outside of the component. But what can we do? It's an API machinery limitation, essentially.
A
This is part of the bigger effort to reduce spending, slash, use the credits that AWS provided to the CNCF, and this is basically the next step after the migration of the registry.
A
I didn't have the time to look at it, to read it and go through it. It seems pretty easy, but I will take a look and discuss it in the office hour. My gut reaction is that in CAPI we should defer this to the next release, because the CI team is doing a huge job in getting rid of flakes in order to get ready for the release.
C
Yeah, just a note that all the jobs that are creating external resources in clouds, so, for example, Cluster API providers, can't be migrated yet, since most of them depend on GCP secrets for auth. So at this point the migration should only be for anything that is running local tests, like lint or unit tests or build. And the other thing to note:
C
From what I've seen, there are a lot of PRs that were opened by the SIG Testing folks themselves already, so make sure, if you're opening a PR, that there isn't already a duplicate. And I think the only tricky thing that people are running into is that a lot of jobs didn't have resource requests and limits set, and those need to be set now. So it's a bit of an iteration to determine the proper requests and limits values to set.
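As a concrete illustration of the requests/limits point, a migrated job ends up needing something along these lines. This is a hypothetical Prow job fragment with placeholder names, image, and sizes, not copied from kubernetes/test-infra:

```yaml
presubmits:
  kubernetes/kubeadm:
  - name: pull-kubeadm-example          # hypothetical job name
    cluster: eks-prow-build-cluster     # target build cluster (name assumed)
    spec:
      containers:
      - image: gcr.io/k8s-staging-test-infra/krte:latest  # image assumed
        command: ["make", "test"]
        resources:
          requests:           # now required on the migrated cluster;
            cpu: "2"          # values are found by iteration, as noted above
            memory: 4Gi
          limits:
            cpu: "2"
            memory: 4Gi
```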
B
Justin, I guess this means that we cannot migrate kOps. I know that they have generated jobs, but because of the cloud provider credential stuff we cannot do it yet.
D
Correct, and actually there's a further wrinkle, which is that we want to get support for, effectively, what's called workload identity, or IRSA. Today we use credentials that are baked into a secret, and there's another effort going on to try to address that at the same time. But it's not trivial, and for kOps, I'm not sure if kOps supports it, but certainly the testing jobs don't support it today.
B
I see, all right. For kubeadm specifically, I think it's possible for us to migrate. We have generated jobs with tooling; we just have to change the cluster field value in the tool that we use, because even if somebody changes this in PRs, our tool would basically stomp on whatever values they feed in.
B
So I think it's global for kubeadm. I think they already migrated one of the jobs, and we just stomp on it every time we iterate something here, so I will log an issue. I think it's doable for kubeadm; we have a good number of test jobs, so I think we can start saving some money, at least on the kubeadm side, for now.
A
So this could be interesting. But I think that kind is also already being migrated; I'm pretty sure Ben and the SIG Testing folks are taking care of what they need. Okay, there are no other topics on the agenda. I don't know if some of you want to bring up some project updates; otherwise, we can give you some time back today.