From YouTube: Kubernetes SIG Apps 20230612
A: Now, well, hi everybody, I'm Kenneth Owens. Welcome to the Monday morning, June 12th meeting of SIG Apps. We have a couple of items on the agenda, so why don't we just jump right in? So, does anyone want to talk about starting a KEP for mutable scheduling directives?
B: Initially, in the proposal, I would love to have support for Job from SIG Apps and HPA from SIG Autoscaling. But this thing is generic, so it would be great if we could agree on how to handle this type of request going forward: whether we go with a field in the object, or maybe with a label or annotation, and what the naming would be, so that all other requests of this kind could follow the pattern.
B: To be able to do the same thing as schedulerName, but for other objects. schedulerName allows you to specify that some other, non-default scheduler should handle a pod and apply specific logic, or possibly, I don't know, use some non-standard metrics, or schedule pods in a very particular way. So we've got this type of feature for pods; we don't have it for any other controllers.
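The schedulerName dispatch described above can be sketched roughly like this; the dicts stand in for real Pod objects, and "volcano" is just an illustrative scheduler name:

```python
# Sketch of schedulerName dispatch: each scheduler only picks up pods
# whose spec.schedulerName matches its own name; an unset name means
# the default scheduler handles the pod.

DEFAULT_SCHEDULER = "default-scheduler"

def scheduler_for(pod: dict) -> str:
    """Return the name of the scheduler responsible for this pod."""
    return pod.get("spec", {}).get("schedulerName", DEFAULT_SCHEDULER)

def should_schedule(pod: dict, my_name: str) -> bool:
    """A scheduler binds a pod only if the pod names it explicitly,
    or names nobody and we are the default."""
    return scheduler_for(pod) == my_name
```

The point of the discussion is that this selection mechanism exists only for pods, not for the workload objects themselves.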
B: Apparently, apart from Endpoints. And this is the problem: if you want to provide custom logic for any API object, you need to disable the controller in the controller manager. That is possible if you have access to the Kubernetes master node, but if you are running in some form of managed environment, like, for example, Google Kubernetes Engine, then your options are pretty limited: you need to fork the API, which is usually a bad idea.
A: Stepping back for a second, I mean, the scheduler doesn't really control the Pod resource, right? The scheduler watches Pod resources and establishes node placement, and in the design of the system, from day one, the idea of supporting multiple schedulers was baked into the architecture. It was never the idea to support multiple controllers, or redundant controllers, for the same resource; the intention was always to make the resources forkable.
A: So I'm not saying we shouldn't do this, but to say that we shouldn't fork it... To me, the arduous portion of implementing your own API is primarily getting the reconciliation logic correct within the controller, right? And then, if you're going to say we're going to do something different: using Kubebuilder to generate a custom resource definition and then deploying that into the API machinery doesn't seem particularly arduous for the end user.
B: Implementation-wise, yeah, sure, this is not a problem if you are submitting resources manually. The problem starts when you want to integrate this thing with other systems. So, for example, you have a system that submits jobs and looks for Job resources, but you would like to handle jobs in a slightly different manner. Now either you change this system to support both the standard Jobs API and your new Jobs API, or you allow, for example, switching on some new functionality in Job using some method, while keeping the API; so the API definition still stands.
B: You need to work within the API constraints, but you can provide a slightly different implementation if that's your need. Without this option, you need to fork the API and provide another controller. That is more or less similar to having a controller name, but then everyone else who uses the API needs to start to adapt to the new CRD and the new API group and the new API version.
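A minimal sketch of the mechanism being described, assuming a hypothetical annotation key (nothing like it exists in the Job API today): two controllers watch the same resource type, but only the one the object names actually reconciles it, so the API definition never changes.

```python
# Sketch of controller selection by annotation. The annotation key and
# controller names are illustrative assumptions, not real Kubernetes API.

CONTROLLER_ANNOTATION = "example.com/controller-name"  # hypothetical key
BUILT_IN = "kube-controller-manager"

def owning_controller(obj: dict) -> str:
    """Which controller the object is assigned to; unset means built-in."""
    annotations = obj.get("metadata", {}).get("annotations", {})
    return annotations.get(CONTROLLER_ANNOTATION, BUILT_IN)

def should_reconcile(obj: dict, controller_name: str) -> bool:
    """Both controllers watch the resource, but only the named one acts."""
    return owning_controller(obj) == controller_name
```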
A: Now we have entities, and that's kind of intentional. I think the things that I see as really problematic with this approach are: one, particularly for v1 APIs, we have a set of conformance suites which define the behavior of a Kubernetes implementation with respect to how those resources are handled.
A
There's
a
difference
in
semantics
between
saying
I'm
going
to
disable
this
resource
entirely
and
when
this
resource
is
applied
to
the
API
Machinery,
it
will
not
be
reconciled
to
I'm
going
to
allow
for
different
versions
for
the
same
version
of
the
resource
to
have
varied
semantics,
depending
on
how
the
Machinery
is,
and
maybe
The
annotation
is
enough.
But
I
just
see
those
as
two
kind
of
hurdles
that
are
like
not
addressed
in
the
cap.
A: Endpoints doesn't have multiple controllers. EndpointSlice is a different API resource, so the limitations of Endpoints precluded us from, yeah, basically it was a scalability limitation that Tim always wanted to address. So we introduced EndpointSlices; we introduced a completely separate resource. In fact, we didn't override it, and then, you know, there's a whole upgrade policy inside of the cluster in order to release that resource, to make it generally available, and to have all the other components adopt it. So maybe someone has, but it's certainly not like we just put a switch in.
A: There's the whole conversion we put in so that you could retrieve EndpointSlices as an Endpoints object, to maintain backward compatibility, and vice versa, and convert them. There's a bi-directional conversion, but that's API-resource bi-directional conversion. It's not deploying a different operational semantic via a reconciler under the hood, which is what seems to be proposed here.
A: If we did this via annotation and then we released it generally, it doesn't prevent users from the community from developing and building a conformant cluster. But the kind of semantic conformance, and the versioning and handling of resources, and what it means to be v1 or v1beta1 in terms of the promises to the community: that has to be addressed in some reasonable way, in my opinion, if we were going to consider this.
A: Alpha, beta, v1: so alpha is experimental, off by default, right? If you're using alpha features in your cluster, you had to explicitly enable them and opt in. Beta-level stability has to go to GA relatively rapidly, given our new policy. But our promise around v1 is that once you go v1, it's stable and backward compatible, and the semantics of the resource and the implementation are stable, or backward compatible, until a v2 is released, right?
A: So if we're going to say, well, now we add an annotation and it completely changes the semantics: maybe that could be a thing, but I mean, that's very, very different from the promises that we've made to the community so far, right? So there's got to be a discussion, and I would definitely want to pull in API Machinery, because, I mean, every SIG has been a participant in conformance, but API Machinery has kind of really driven it, and they also really care about it. They have a vested interest in that as well, so I would want to pull them in, aside from just SIG Apps and SIG Autoscaling.
A: Yeah, so this would probably, I imagine, be SIG Architecture, right? You're saying we're going to change kind of a fundamental principle. And again, I'm not saying we shouldn't, but I'm saying we definitely have to proceed carefully if we're going to implement a mechanism to allow multiple... well, I guess the way it's proposed here isn't simultaneous, right? You would have one or the other.
B: Implementations, like, you can have pods handled by multiple schedulers and identify which with schedulerName: you can have the default one and you can have your custom one, yeah. In this proposal, in this form, you could have the default controller and a custom controller that should still operate within the API definitions, yeah.
A: I imagine it would be SIG Architecture for making a fundamental change to the architecture of the entire system as well, right? We're saying, here, now we have multiple reconcilers for an API resource. Which, I mean, you could always kind of do: you could always try to chain them, because there's nothing saying that you can't have multiple controllers watching the same resource. But you can only have one owner, right? And that's the other thing: the owner reference for the controller.
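The single-owner rule mentioned here can be sketched as a check over ownerReferences: many references may exist, but at most one may set `controller: true`, and that is the one that has adopted the object. Simplified dicts stand in for real metadata.

```python
# Sketch of the "one controller owner" rule from ownerReferences.

def controller_owner(obj: dict):
    """Return the owner reference marked controller: true, or None."""
    refs = obj.get("metadata", {}).get("ownerReferences", [])
    controllers = [r for r in refs if r.get("controller")]
    if len(controllers) > 1:
        # The API server rejects objects with more than one controller owner.
        raise ValueError("at most one controller owner is allowed")
    return controllers[0] if controllers else None
```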
A: Let me, there are a couple of people out; again, I'd ping Aldo, and yeah, I would like to see what Daniel thinks about this, maybe even what Tim thinks about this as well, because this is kind of a big shift. But I do get the use case, right? It's like, I want something like Volcano, and not only do I want the scheduler to behave differently with pods, but in terms of the way the job itself and its resources are handled, I want something different.
C: When we have schedulers running, what do you mean? Like, both the schedulers are going to be in place?
C: So is it assuming that only one scheduler is going to act on the pod? Because that is how it works, if I remember correctly: if you specify the schedulerName in the Pod spec, only one scheduler is going to pick it up, and if not, it's going to be the default scheduler. Or are you thinking about chaining?
A: Chaining works today, right? There's nothing stopping you. The built-in controller would inject the controller ref and act as the owner for the object, but you can have multiple reconcilers acting on the same object. This is more like we're going to have multiple potential owners, and then some annotation or some API mechanism that declares which one a resource should be owned by. Oh yeah, there it is: there's a KEP, it's actually linked in the meeting minutes.
A
It's
cap,
40
called
controller
name
but
yeah,
so
that
that's
yeah
I
mean
it's
ambitious,
but
we
just
got
to
think
through.
C: Yeah, I agree. I think we wanted to do something similar, but most of the folks that I've seen are forking the API and then building their own controllers. That mechanism is not bad as such, but we have already defined a particular lifecycle for the objects that we are interested in via the controllers.
C: So if we are trying to be conformant, or if you're saying that we are going to allow non-core controllers to act upon the core objects, there has to be some way to say it is not going to be conformant anymore, like the distro is not going to be conformant anymore. That is where I think I would draw the line.
A: I mean, pretty much, but it's kind of interesting in the sense that you're literally saying "I don't want to be conformant" at that point, right? Because if you implement it... I mean, perhaps you could do something like: I'm going to implement a conformant Deployment controller, but I'm going to do it in such a way that I get a performance benefit, so it passes the conformance suite, but it actually has better performance, right?
A: You want to evolve to a place where the built-in resources are no longer special and all resources are kind of just resources, whether they're, you know, part of conformance or whether you're bringing your own. But, you know, it very much is: there will be one kind of controller that owns a resource. Particularly for workloads controllers, you can chain as many as you want, but if you want to change the fundamental behavior, you know, go fork the controller for the API.
C: I think at some point in time, in architecture, we wanted to have not too many controllers within the core, or too many APIs within the core, right? So wouldn't we be going against that principle, at that point, where everything has to stay in core Kubernetes?
A: The entire purpose of custom resource definitions would suggest: one, have resource definitions, then minimize the set that are built in, and allow people to kind of choose their own journey with the resources that they put into the cluster; and that would be both, you know, providers of Kubernetes distributions as well as end users, right?
A: So it is kind of, yeah, not the way. But, I mean, again, if the community at large feels that there's value in it, I'm not going to say no, and if we can find a story about how you can do this in a way where conformance still makes sense, where API versioning still makes sense, and where the end user of a cluster can be confident in what they're getting when they deploy, then, you know, it's not...
C: Makes sense. I think, would a good starting point be something like: we are still going to be conformant? By that, what I mean is: we are going to honor the lifecycle of whatever the workload is, which is maintained by the controller, and say that by having an annotation or a particular field, and let the new controller that is being created be conformant with what we have had in the past, or what we have currently. Would that be a good starting point?
A: This is how you extend the API machinery by adding a custom resource, and this is how you add a custom reconciler into the cluster, and here's how you can do it for cluster-scoped resources as well as namespace-scoped resources, and, you know, all the things there, because there's a lot of prior art. This would maybe be something, one day, if it evolves into something we could point them at too, as a different direction.
D: Yes, I have a topic, and it was added to the agenda by Aldo, but I'm a little bit light on the details, so bear with me; that's why you're stuck with me.
E: Yeah, just want to add that some of the CRDs are already doing this: they have one type, and they have an annotation or label. For example, the Postgres operator has it, so you have a few PostgreSQL-like objects, and you can specify a controller which will control this resource. And I also heard from our monitoring team that they would like to know if it is possible to define this somehow at the API level.
E: So you could expect the same for other CRDs, that you could...
A: Okay, are we ready to move back to mutable scheduling directives?
D: So basically, this is a need that came up very recently, so we don't yet have a draft KEP or anything, but we just want to collect quick feedback. This is also needed for batch, and specifically for Kueue.
D: So Kueue has the ability to control job lifecycle: it can stop, or, as we call it, suspend, a job, and unsuspend it. And Kueue also assigns node selectors. So, let's say, a user creates a job, and then Kueue injects node selectors at the moment of unsuspending, that is, starting, the job. However, when it suspends it back, it restores the original user node selector. So this is for context.
D: Currently there are three operations, like three requests, that the external controller, Kueue in this case, needs to send. First of all, it suspends the job, but it also needs to reset the start time, and this is the tricky part: the start time lives in the status. And then, and only then, it can...
D: Then we restore the node selectors. And this is three API calls. So one issue is the overhead, but there's also this trickery, and we would like to improve it. So the idea, the first point of the improvement, is that you would be able to modify the pod template in Job on the suspend call. So when you're suspending, at the same time you can already inject the node selectors, and this is currently, I think, blocked. Yeah, that's the main thing, but there is also another point, regarding the start time.
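The single-call improvement could look roughly like the following sketch. The patch shape mirrors batch/v1 Job fields (`spec.suspend`, `spec.template.spec.nodeSelector`), but the helper itself is a hypothetical illustration, not part of Kueue or client-go:

```python
# Sketch of collapsing the three calls into one: a single merge-style
# patch body that suspends the Job and swaps the pod template's node
# selector in the same API call.

def suspend_patch(node_selector: dict) -> dict:
    """Build one patch body: suspend the Job and restore the user's
    node selector at the same time."""
    return {
        "spec": {
            "suspend": True,
            "template": {"spec": {"nodeSelector": node_selector}},
        }
    }
```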
D: So that's why Kueue does this: fast stopping, then resetting the start time, so that it can, in the next step, suspend. And this is problematic, and that's why we came up with this idea of relaxing this: we would allow modifying the pod template even if the start time is not null, and this would allow doing the suspend and the restoration of the node selector in one call.
D: Potentially. And it was mentioned by Aldo that one problematic case is that when you suspend, the pods with the old template, injected or modified by Kueue, may still be terminating, or still essentially running, for a while. So then, if a user or the external controller unsuspends, you may end up having two types of pods, with different templates, living at the same time. And the proposal to solve this is that the job controller would wait until all the pods with the old template are deleted before creating new ones.
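The wait-before-recreate rule being proposed can be sketched like this; `template_hash` is a hypothetical stand-in for however the controller would distinguish pod generations:

```python
# Sketch of the proposed rule: after an unsuspend with a changed
# template, create no new pods until every pod built from the old
# template is gone, so two pod generations never coexist.

def can_create_new_pods(pods: list, current_hash: str) -> bool:
    """Hold off while any pod from a previous template still exists,
    whether running or terminating."""
    return all(p["template_hash"] == current_hash for p in pods)
```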
D: So there is a little bit of tinkering to solve the race conditions, yeah. But in general we would like to get some quick thoughts on whether this is the way to go, or how otherwise to improve the API. This is still vague, but it's a proposal; in general, from the bigger picture, we would like to go with an API where you don't need to issue these three calls and do a status update in there.
A: Sorry, how would you make the node selector mutable? Because, as far as I know, node selectors on a Pod spec are only mutable in an additive way: you can add additional entries, but you can never remove an entry from a node selector, and you can never mutate...
A: Even with the mutation of the pod template spec, you don't expect that mutation to propagate to a running pod. You expect that, since the job is suspended, when you unsuspend the job you would create a new pod, and that pod would match the node selection specification in the pod template spec, right? You're not trying to mutate the node selector of a running pod, or a pod that's in flight in any way. Sure, yeah. Okay, that makes more sense. Yeah, okay, I get you.
D: But then it's a question of how problematic it is that two types of pods will be running for a while. So let's consider the scenario: you suspend the job with the old node selector, then you unsuspend, so there is a different set of selectors. If we don't wait for the old pods of a job to terminate, then it means that the job owns pods with the old selectors, for a while, along with the new ones. And then the question is: should we wait until all the old pods are terminated, as in deleted?
A: It's a question of the semantics you want, right? If you're going to wait, then how long would you wait? Because, all right, if you're going to wait till they're terminated, and there's an unintentional disruption and the node is gone altogether, you may have to wait until that node gets evicted from the cluster in order for that pod to be, you know, effectively gone. That wait could be substantial, even if you were only planning to wait under normal operating circumstances.
A: That seems reasonable, as long as the guarantee you provide to the end user is that, you know, this is kind of best effort, and in the event that the waiting part is taking too long, you're going to allow some degree of temporary parallelism, whereby you have a terminating, or potentially running, pod that needs to be cleaned up later on one node, and you've scheduled one on a different node. But what's the implication of that? Why would that be bad? Is it just a resource consumption thing?
D: I think... so again, unfortunately Aldo cannot be with us, so I'm speaking on his behalf; that's what I'm thinking. But I think his issue was not so much with the resource consumption but more about user confusion, let's say. But maybe that's not it.
A: I can understand precisely why that would be problematic if you have, say, two pods with index zero, but there are other fencing mechanisms in that case to prevent it, as far as I understand; I'm not an Indexed Job expert. For a job in and of itself, parallelism has always been, to some extent, best effort, right? We try to eventually get there. So if it's temporarily not the exact parallelism that's in the user's declared intent...
A: That doesn't seem horrible to me, right? It sounds like this is kind of a question of what semantics we want and which is best. You could do either of these things, and it wouldn't be like, well, that's horribly wrong; it's kind of ambiguous as to what the right thing is.
D: Cool, yeah. I think that's what we want at this point. We just want to solve this problem, and probably we still need to drive the KEP and go through review, etc. There are different options to discuss for how to best define the semantics, but I just wanted to check whether you see some problems with this that are more fundamental; and if not, then...
A
Are
you
saying
that
in
well
the
only
problem
I
would
see
it's
actually,
that's
not
a
problem
so,
and
you
know
for
a
fact
that
the
the
the
new
selector
we
don't
validate
immutability
of
the
node
selector
on
the
pi
template
spec
of
a
job
like.
D: We do allow mutation under certain conditions: when we suspend or unsuspend the job, when we make the transition, so that Kueue can inject or restore the node selector. But when suspending, we also require that the start time is... let me check, yeah.
A: The only concern, and it doesn't sound like we really have that here, is this: let's say it was one hundred percent immutable to begin with, and then you want to make it mutable. In the past, the API reviewers have said that that's fine: making an immutable field mutable doesn't represent a backwards incompatibility, because if you couldn't mutate it before, but now you can, people weren't depending on the mutability.
A
However,
changing
the
semantics
of
the
mutability
has
to
be
done
in
a
Backward
Compatible
way,
so
in
general
you
can't
make
it
more
restrictive.
So
if
you
were
allowing
mutation
of
a
particular
type
before
you
can't
make
a
breaking
change
into
a
subsequent
release
that
makes
it
more
immutable
or
mutable
in
a
different
way
right.
You
can
make
it
less
immutable.
You
can
allow
different
types
of
mutation,
but
you
can't,
you
know,
make
the
mutation
more
restrictive
or
disallowed.
After
that
point,.
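A rough model of the relaxed validation under discussion, assuming simplified Job dicts rather than the real apimachinery types: the pod template's node selector may change only on a suspended Job, and relaxing the rule further later would be backward compatible, while tightening it would not.

```python
# Sketch of conditional mutability: template changes are permitted
# only while the Job is suspended; identical templates are always fine.

def node_selector(job: dict) -> dict:
    return job["spec"]["template"]["spec"].get("nodeSelector", {})

def update_allowed(old: dict, new: dict) -> bool:
    if node_selector(old) == node_selector(new):
        return True  # no template change: always allowed
    # Template changes are only allowed on a suspended Job.
    return bool(new["spec"].get("suspend"))
```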
A: Right, and that seems reasonable; it doesn't seem like it's super problematic. I mean, you can open a KEP. It wouldn't be a bad idea to open a KEP just so that all of this is captured and we've demonstrated that we talked it through, but I would expect it to be uncontroversial and relatively fast to get through.
D: Okay, cool. It's still just a couple of days till the deadline, so we will consider it if we have capacity, but yeah, it's good to know.
D: Cool, I think that's exactly what we wanted: to get feedback for now. Any other comments or questions?
A: Okay, I will take silence as us having nothing else to discuss. All right, thanks everyone. Thank you all for attending and contributing; we'll see you during the next meeting, and have a happy Monday and a good week.