Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 22 April 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
And how we wanted to bring a conclusion to that. So far, what we decided was that credential rotation using access tokens, which is an access key and secret key, just makes the design very complicated, given that right now even the cloud providers do not support that form of authentication, and also given that we can do it in a much simpler and cleaner fashion when we use service accounts.
A
I think the decision we made to not do credential rotation for access keys is something we can stand behind. I just wanted to open up with that, to give everyone a chance to also comment on it.
A
Yeah, so, yeah: STS, yes, Security Token Service. Yes, so I looked it up; I looked into it a little bit more. I haven't looked it up for anything other than AWS yet, but it looks like the token refresh request is actually done by the client, and the client always talks to the link-local IP address, which is 169.254.169.254, and there's a particular endpoint: it hits an HTTP endpoint and asks for a new token, a new session token.
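As a sketch of the refresh flow described here, the client querying a fixed link-local endpoint for fresh session credentials, here is a minimal parser for an IMDS-style credential document. The field names match the documented AWS instance-metadata response, but the helper itself and the sample values are illustrative, not part of any SDK:

```python
import json
from datetime import datetime, timezone

# The instance-metadata service lives at a fixed link-local address.
IMDS_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def parse_imds_credentials(body: str) -> dict:
    """Parse the JSON document returned for a role under the
    security-credentials path. Field names follow the documented
    IMDS response; this helper is an illustrative sketch."""
    doc = json.loads(body)
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "session_token": doc["Token"],
        "expires_at": datetime.strptime(
            doc["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc),
    }

# A sample body shaped like what the endpoint returns (values invented):
sample = json.dumps({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secret",
    "Token": "session-token",
    "Expiration": "2021-04-22T12:00:00Z",
})
creds = parse_imds_credentials(sample)
```

The point of the pattern is that the client, not the provisioner, initiates the refresh: it re-queries the fixed endpoint whenever `expires_at` approaches.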
A
Okay, so the fixed address is... So, in EC2, if you're running any workload on EC2, the EC2 instance can be created with some roles.
A
How does it... yeah? I don't know about other vendors yet; I still have to understand a little bit more. But with AWS I was able to spend some time, and, like Vyani was mentioning last time, STS is the way to go about it. But there are some restrictions with STS also, which is that you can only create credentials with an expiry of 12 hours or 24 hours.
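The STS expiry restriction mentioned here can be handled client-side by clamping the requested lifetime. A minimal sketch, assuming the documented AssumeRole bounds (a 900-second minimum and a role-configured maximum of up to 12 hours); the helper name is ours:

```python
def clamp_sts_duration(requested_s: int, role_max_s: int = 43200) -> int:
    """Clamp a requested credential lifetime to what STS will accept.
    AssumeRole's DurationSeconds must be between 900 seconds (15 min)
    and the role's configured maximum session duration (up to 12 h =
    43200 s). This is only an illustrative sketch of that check."""
    STS_MIN = 900
    return max(STS_MIN, min(requested_s, role_max_s))

# Asking for a week of credentials still yields at most the 12 h cap:
lifetime = clamp_sts_duration(7 * 24 * 3600)
```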
A
So yeah, there's a little more discussion needed, but things are still possible, given that, if the client is the one originating the request for refreshing, it solves that problem of how the client takes the new credentials. And secondly, given that service accounts can be rotated in Kubernetes, we have an actual mechanism for putting new tokens in there and having something like 169.254.169.254 always return the latest one.
C
Sorry, I wanted to ask: when we are minting credentials within COSI, right, and providing them to the workload, does that process have to change somehow to support that? Or is it just by having the endpoint? So essentially, I don't even need to pass on the credentials to the workload, right? It just needs to be available through that endpoint, is that kind of [it]?
A
Yeah, something like that. So the clients for AWS, at least, have a mechanism where they talk to instance roles, which is that link-local IP address, to figure out what their credentials are and what their access is to a particular resource.
B
Well, yeah, that would be the sticking point, I think: users need to... if you just want that particular functionality, there has to be a way to request it in a way that they will be guaranteed to get it. So if it's AWS-specific, that makes perfect sense to me: there would be an AWS protocol that you'd ask for, and you get it, and it would work.
B
So, for the protocols that are meant to depend on some sort of built-in token that the pod just has, then yeah: the BucketAccess is just identifying the principal that's supposed to have access, so that can be granted by the back end, and then everything else is supposed to be handled through your regular token.
B
So yeah, you wouldn't have to provide any credentials down to the workload if it was GCP or AWS or Azure, or one of the cloud-provider-style protocols.
B
Temporary, yeah. It's not clear that that model could be implemented, because this [works] to the extent that you're running in an environment where your actual pod security tokens are identical to your object storage access tokens, and that wouldn't be true anywhere other than one of the hyperscalers.
B
Like, I may want to use GCS bucket storage from my on-premises Kubernetes cluster. My on-premises Kubernetes cluster isn't going to have Google security tokens for all of my pods, right?
B
It's just going to have whatever I set up with my Kubernetes distro, and so in that context I'm going to have to use some other protocol to talk to GCS, probably the S3 protocol, and in that context I'll be using access keys and secrets, the regular way of doing it. Because there's no way to... unless you get, like, GKE on-prem, which I think is a product they have, you can't get Google security tokens that are identical to your GCS security [tokens].
A
Yeah, I mean, yeah, the clouds specifically have it. In GKE they have that: in GKE, a pod is associated with a particular security token. What that means is, if any request is made with the security token of the pod, which is not necessarily the same as the GKE token, then it is understood that it has the same role as what was defined in the equivalent account for that security token.
A
This is, for instance, like: if I create a pod with a service account, whatever, say, service-account-one, someone in the back end associates that with a role in GKE, and any request going from that pod will have the security token that it always has, and GKE knows how to associate that token with a role that's been defined in it.
B
Right, but in the general case you don't have unified IAM between your Kubernetes cluster and your object storage. Absolutely, yes! So you need something else, and that's what regular S3 with the access key and secret is for. I don't know if there are going to be other flavors of unified IAM beyond the ones that the hyperscalers offer, because it seems unlikely to happen in general that you'll have a Kubernetes distro and an object storage solution provided by the same company with unified roles and everything. But...
C
What is the problem? I mean, it's not... even if the identity management is provided by the storage itself, that can still be provided by the storage. I mean, even if it's not provided by the cloud, it still has meaning, but...
E
Instead of returning access keys and secret keys, we could return an OIDC token, and the pod could directly assume the role with web identity, for instance. So it would be the same, basically, for your bucket request.
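The flow E describes maps onto the STS AssumeRoleWithWebIdentity operation, which trades an OIDC token for temporary credentials. A sketch of the request shape only: the parameter names are the documented STS ones, but the role ARN, token value, and helper function are made-up illustrations, and no signing or transport is shown:

```python
from urllib.parse import urlencode

def build_assume_role_with_web_identity(role_arn: str, token: str,
                                        session_name: str,
                                        duration_s: int = 3600) -> str:
    """Build the query string for an STS AssumeRoleWithWebIdentity
    call: the pod presents its OIDC token and gets back an access
    key, secret key, and session token. Sketch only, not a client."""
    params = {
        "Action": "AssumeRoleWithWebIdentity",
        "Version": "2011-06-15",
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": token,
        "DurationSeconds": str(duration_s),
    }
    return urlencode(params)

query = build_assume_role_with_web_identity(
    "arn:aws:iam::123456789012:role/bucket-reader",  # hypothetical role
    "eyJhbGciOi...",                                 # the pod's OIDC token
    "cosi-workload")
```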
A
I think Ben's asking... yes, that's one possibility, yeah.
A
Something that works across the board: either we should implement it, or it should be something that's so universal that we can rely on it, yeah.
E
Kubernetes supports... there's support for OIDC, right?
A
[It's a] standard for authentication and authorization, so I think all the OAuth providers are OIDC compliant. It's just like a standard for authentication, so it's like the protocol, I believe. Yeah, that's as much as I know about it, and I know Kubernetes supports the protocol.
E
If you have an identity provider inside your cluster, or outside, let's say your storage driver can be trusted, okay, and it asks for a specific OIDC token for the bucket, and then the pod can use this token to access temporary credentials. And then it's up to the application to renew the tokens and so on.
B
I'm not saying you can't do it the other way, but I'm saying, like, this is a fundamental question. When we were designing the COSI protocol at the beginning, the whole reason we had BARs and BAs was based on the assumption that it's COSI's job to mint a credential and own that process. What you're saying now is that credential minting is no longer necessary, right? Because you just have some external authorization provider, and all you have to do is register...
B
...some principal as having access to a bucket, and you're done. And then all of the actual authentication is between the workload and the OIDC provider and the object storage, and COSI gets out of the way, right? You wouldn't even have to provide any information to the pod other than the bucket identifier.
C
But I think there is something to provide, which is currently there. It is not clear what the standard is to do it, if that's what you want for object storage, right? No...
A
Sorry, I don't think we should go the OIDC route. I think there is a simpler way, even something that we can implement, which is a metadata service. That's what AWS calls it, that's what GCE calls it, and OpenStack also had this: a metadata service that is on the link-local IP address, which is 169.254.169.254. You query it and you get some responses back about yourself, about your workload.
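A minimal sketch of the metadata-service idea: a tiny HTTP endpoint a workload can poll for its current credentials. A real service would sit on the link-local address 169.254.169.254; this sketch binds to localhost on an ephemeral port purely so it can run anywhere, and the path and credential payload are invented:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Whatever the provisioner last rotated in; the endpoint always
# serves the latest value, so the client never holds stale keys.
CREDS = {"accessKeyID": "AKIA-EXAMPLE", "secretKey": "rotated-secret"}

class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/latest/credentials":   # hypothetical path
            body = json.dumps(CREDS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MetadataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/latest/credentials"
fetched = json.loads(urlopen(url).read())
server.shutdown()
```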
A
I'm just saying how things happen in Amazon, and I think right now our current position of going with access keys and secret keys is good enough for alpha.
A
And then, credential rotation is not something we are locking ourselves out of. Like we just discussed, we have multiple options: one is OIDC, one is the metadata service, and I'm sure more options will come up. But I don't know if we should spend more time on it right now.
A
True, true, and it's not the highest priority right now, so yeah, I think it's okay to push it back a little bit. Okay. So the next important thing is actually the API review: we're still waiting for Tim to get back to us. If any one of you gets a chance to talk to Tim, or if you can help with reaching out to Tim, it's much appreciated.
B
I typically just resend the email after a week if he hasn't answered, understanding that it probably got queued and then fell off the back of the queue, and so you've got to push it back onto the head of the queue.
A
I don't think he's ever gotten angry; he's really nice. Actually, I think he's gotten angry once.
A
Okay, okay, good luck! We've been in the same boat right now, yeah. Okay, so that's going on in parallel, reaching out to Tim. I think the KEP is in a decent place. Let's ask Jeff; Jeff is here. Hey Jeff, are there any updates left to be made on the KEP?
D
No, I haven't updated it in a while. In fact, I haven't directly addressed Sebastian's comments from a week or two ago, although we've discussed them some, a little bit.
D
But really, my personal feeling is the PR for the KEP is getting too large and unfocused, and what I'd like to do is just get that PR merged, and then we can go on from there and make additional changes.
A
First of all, aren't all KEPs pretty much [like that]? So the "too large" part, Ben, I believe, is from the number of comments that are on it, yeah, and it takes a long time for the page itself to load. In terms of content, I think we're pretty standard, just as much as any other KEP. Yeah, okay, yeah.
D
Yeah, I agree, I agree. I mean, the KEP is large, but the problem's big, so it seems okay to me. It's just this PR, the one that defines the API change that we did related to Buckets and BARs: it's described in the KEP PR, but it hasn't been merged. So if someone was looking at COSI, all they'd see is the KEP, and they would just be missing the last month or so of dialogue.
A
I see, so you mean the updates after we had the discussions about bucket mutation, right, that kind of thing. Okay, I know we updated the API; maybe we haven't updated...
D
...the text about it. There's just been... I mean, there's been the change to the API that we went over quite extensively, there are minor changes to the RPC messages, and there's some improvement or clarification in the general workflow, just things like that. You know, there were fields that we did... I think Ben said there was a particular field that wasn't needed anymore, and we agreed; maybe it was anonymous access, or public, anyway.
D
All these discussions we've had in this meeting are reflected in that PR over the last, maybe it's two months now, but the PR hasn't been merged yet, so they're not actually in the KEP.
A
Okay, makes sense. So, okay, so what updates need to be made in the PR? Do you know? Are there any?
D
Perhaps we want to directly address some of Sebastian's points; we could discuss that separately. But then it needs the approval: the PR needs to be merged into the KEP.
A
No, I think we can set implementable to true, and then, if they approve it, then it just becomes true.
H
Yeah, you should have that section, actually, and then you probably already have those things, right? You probably already have the section there, just not filled out.
H
Well, right, yeah: you probably already have that. I think you already have that somewhere.
A
Sure, yeah, we can do that. So I'm guessing... let's take something that's already been done: local persistent volumes, and [its] production readiness.
H
So yeah, for this, what you actually do is, once you're ready for them to review, you just go to the Slack channel and you ping them, okay, and then one of them will come and do it. So I think you can actually add the name later.
A
Okay, all right. So, everyone here: can you take a look at the comments made by Sebastian and see if it's something that we should respond to, and if so, help us respond to it? I think Sebastian joined us a few times here; his handle was hansler, or hans larry s. He joined us at the meetings a few times.
A
Yeah, hassler sn, yeah, that's it, yeah! So what was this question about? "If BA cannot exist without BR, then why does BA exist in the first place? Can't we simply merge BA and BAR into a single resource?" Yes, because we want to separate the actual secret from the request for it.
A
"PV exists because PV can exist without a PVC, either because it's statically provisioned or because you wanted it to. Bucket Access: a BA can exist without a Bucket Request, either because it's statically provisioned or because you want it to. A BA can neither be created without a BR, nor is there a need to retain it, as there's no reproducible state associated with the BA."
A
I see what he's saying, but we want to do this in order to... yeah, so this brings back the question that we had last week. Our original idea was: we want to put the BA in the global namespace so that not everybody can just read it, but really, they can't read it in the same namespace by just mounting the bucket.
A
That BA is separate from BAR... I mean, I'm sure the additions we made had good reasons, and his argument probably doesn't [account for them]; he probably hasn't thought through everything. But we need to go back and remember why we made this decision.
A
While we reach out to Tim, yeah, I would like you all to just take a look at the KEP. I've pasted a link to it in the chat. Just go through it, see if there's any question or comment that catches your eye, any section that catches your eye, and make sure everything looks good, and bring it up if you have any questions or suggestions. Like I just said, that's all we need in order to move this forward.
A
All right, so next is development. So we've actually been moving at a pretty good speed in terms of development. We created the repo that Vyani and someone else were suggesting, Guy, I think. So we've created a sample repo called cosi-driver-minio.
A
And we've already started moving the driver code into this. In this repository we have scripts that show you how to deploy a driver along with the sidecar, and anyone who's looking to write their own driver can actually look at how it's done here and then just follow the same process for their driver. Now the sidecar, which originally had this driver code, doesn't have it anymore, and doesn't have a deployment YAML right now, because you can't just deploy the driver... let me say, the sidecar, without the driver.
A
All right, so today a pull request was made by Janis. He's not here right now, but he made a pull request to implement grant access and, I believe, also revoke access.
A
So I would like more people to be involved in the review process, so that you're all familiar with the code, especially as some of you might start implementing your own drivers or, you know, making more changes to the architecture.
A
It'll really help and speed things up if you're already familiar with the code, so this is a good PR to start with. And if you also subscribe to our projects, you'll be able to get more notifications about the pull requests that we're making, and, you know, I welcome everyone's reviews. I mean, as long as we're all aligned with the fact that we want to move forward, everyone's reviews are welcome.
A
It's happened before where PRs go stale simply because people are contentious over what's the right approach, when either one might work. So yeah, in terms of development, that's what we've got. The thing that we haven't finished developing is the finalizer lifecycle for, I believe, Buckets and BucketAccesses. Krish, could you bring up again what you told me yesterday?
J
Yeah, so right now in the sidecar we're not really creating finalizers for Buckets and BucketAccesses, so things can just be deleted. So we added the finalizer logic to the CSI adapter; that was kind of the first step.
J
So now, when you create your workload and you mount the secrets for your bucket access, it adds those finalizers onto the BucketAccess for the pod, with the name and the namespace, and then it removes them when the volume is unpublished. And so we just have to add similar logic to the sidecar, in order to add finalizers everywhere they're relevant.
J
Oh, I guess one other thing that it'd probably be good to do is also add a finalizer onto the secret that we create.
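The bookkeeping Krish describes can be sketched as a pair of idempotent helpers over the object's `metadata.finalizers` list, which is all a Kubernetes finalizer is: an object with a non-empty list is not removed until every entry is cleared. The finalizer name below is invented for illustration:

```python
# Hypothetical finalizer name, for illustration only.
FINALIZER = "cosi.example/bucketaccess-protection"

def add_finalizer(obj: dict, name: str = FINALIZER) -> None:
    """Add a finalizer string; idempotent, so safe to call on retry."""
    fins = obj.setdefault("metadata", {}).setdefault("finalizers", [])
    if name not in fins:
        fins.append(name)

def remove_finalizer(obj: dict, name: str = FINALIZER) -> None:
    """Remove a finalizer string if present (e.g. on volume unpublish)."""
    fins = obj.get("metadata", {}).get("finalizers", [])
    if name in fins:
        fins.remove(name)

bucket_access = {"metadata": {"name": "ba-1", "namespace": "default"}}
add_finalizer(bucket_access)
add_finalizer(bucket_access)   # second call is a no-op
```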
A
Right now, that's how it's done, but it needs to be improved, I agree. So yeah, I ran some tests. So, Ben, last time my pushback was that it should just work, and unless there's a problem, [we shouldn't] actually optimize. Well, I ran some tests, and I can tell you it leads to a lot of conflicts, so it probably won't work. It's probably going to take 10 minutes before you can actually make one update to a pod finalizer. So that's not going to work.
A
So you said we maintain a local data structure, which is based on the informer, of the current pods that are using it, right?
B
You have exactly one finalizer, and then the controller is responsible for knowing, comprehensively, whether there are any pods using it or not, and then, when the last one is gone, that's when you remove the finalizer. And you have to ensure the property that, when it's in the deleting state, nobody can bind; that's the key, right? So you have a one-way ratchet on the way down: once it enters the deleting state, you can only unbind and not bind.
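Ben's single-finalizer scheme with a one-way ratchet might look like this in outline: the controller tracks bound pods, refuses new binds once deletion has begun, and releases the one finalizer only when the last pod unbinds. A toy sketch, not controller code:

```python
class BucketAccessTracker:
    """Toy model of the one-finalizer scheme: the controller owns the
    full picture of bound pods and enforces the one-way ratchet."""

    def __init__(self):
        self.bound_pods = set()
        self.deleting = False
        self.finalizer_present = True

    def bind(self, pod: str) -> bool:
        if self.deleting:           # ratchet: no new binds after delete
            return False
        self.bound_pods.add(pod)
        return True

    def unbind(self, pod: str) -> None:
        self.bound_pods.discard(pod)
        if self.deleting and not self.bound_pods:
            self.finalizer_present = False   # last user gone: release

    def mark_deleting(self) -> None:
        self.deleting = True
        if not self.bound_pods:
            self.finalizer_present = False

t = BucketAccessTracker()
t.bind("pod-a")
t.mark_deleting()
late = t.bind("pod-b")   # refused: object is already deleting
t.unbind("pod-a")        # last unbind releases the finalizer
```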
A
No, it was the fact that I was doing... the number of updates that were happening, the number of pods.
A
Well, yeah, there's that, but also the same chance of conflict, yes. It doesn't make a difference at all in terms of the conflicts; it's very heavy on etcd if it needs to lock and check that whatever update you're making is actually the right update, given what the current status is.
A
It's better to do the local data structure, like Ben is suggesting. I see; that's kind of the only approach that can work at this point, and scalably.
A
I'm not so sure patch will do the job for us, because patch doesn't account for... it doesn't make sure that any update you make is only done if the previous version of the resource matches the version that you're sending back.
B
Okay, there's, like, an ordinary merge patch, there's the strategic merge patch, and there's the JSON patch. And with the JSON patch, you can put a bunch of test statements in your JSON, to say: test if this equals that, test if this equals that; and then, if any one of those fails, the whole patch fails. So what you can do is structure your JSON...
B
You can use JSON patch and structure your JSON patch to enforce whatever invariants you want, like "the resource version must be this" or "the old value of this field must be this"; otherwise the patch fails. And then, of course, you'll still race with other things, and sometimes you'll lose those races and your patch will get kicked out, and you'll have to patch [again].
A
Drives. So what we do is, you know, we match drives to a request, and for this, if a pod requests 100 drives of the same kind, then all of them match to the same resource that's available. And the problem is when the retry happens in the CreateVolume: in this case, when you ask for 100, only one succeeds, so the retry for the rest of the 99 all happens at the same time.
A
So, in this case, in the problem that she's describing, the finalizers, too, get into this problem, because we start with a version of the resource and then, on top of that, we apply the patch. It's the starting version of the resource that would be different from what everyone else's starting version was.
H
So, okay, so I guess the answer I was trying to get at is: I know that if it's not a CRD, if it's an in-tree resource, right, if you use patch, it will actually reduce the number of failures when you modify.
B
There are certain types of changes where you only want the change to succeed if the resource hasn't been touched since you looked at it, because the change is dependent on certain fields being set to certain values, and if anything changed, you want to take another look at it before you decide whether to make the change. But if all you're trying to do is, like, remove a finalizer, you don't care what the rest of the state of the object...
B
...is. You only care that the list of finalizers hasn't been touched since you looked at it, and you remove one element from that array and reduce the length by one; and any other change to the object, you don't care, because all you're trying to do is remove a finalizer. And so you can structure...
B
So I'm going to see if I can dig up my example of this, because I spent a lot of time goofing around trying to get this right.
A
Okay, now, if you prefer YAML to JSON... the patch is expressed in YAML format, okay. Multipatch demo, let's see... patches... okay: target, group, version, name, kind, namespace... okay.
B
Okay, do you mind?
B
This can either add or remove finalizers from a list, and "want" means it should be added, in which case there's only one element of the JSON patch, which is an add. But for "not want", it does a test that the element was [there], so it's looking at... it scans through the list of finalizers, and it finds the index of the one that it wants to remove.
B
The same. So then, what you would want to do is have a test for the resource version, or...
B
The problem is, what you would have to test is that all of the other 10 elements don't have the value, because an arbitrary number of changes could have happened. You know, if at the time you looked at the list of finalizers there were [some], between the time that you did that get and you send your patch, there could have been ten updates.
F
It seems to me that an update might be good, because you do a get of the resource, you look through the finalizers, if it's not there you add it, and then you update. And if the update fails, I think there's a way... you can figure out if it's because you don't have the latest version of the resource; I forgot what they call that error.
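The error F is reaching for is the 409 "Conflict" the API server returns when an update carries a stale resourceVersion; client-go packages the get/modify/update loop as RetryOnConflict. A toy in-memory stand-in for that loop (the fake server and helper names are invented for illustration):

```python
class Conflict(Exception):
    """Stands in for the API server's 409 Conflict error."""

class FakeAPIServer:
    """Toy store enforcing optimistic concurrency on resourceVersion."""

    def __init__(self, obj):
        self.obj = obj

    def get(self):
        return dict(self.obj)

    def update(self, new_obj):
        if new_obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("stale resourceVersion")
        self.obj = dict(new_obj)
        self.obj["resourceVersion"] += 1   # server bumps the version

def add_finalizer_with_retry(server, name, max_tries=5):
    """Get/modify/update loop: on Conflict, re-read and try again."""
    for _ in range(max_tries):
        obj = server.get()                 # always start from latest
        if name not in obj["finalizers"]:
            obj["finalizers"] = obj["finalizers"] + [name]
        try:
            server.update(obj)
            return True
        except Conflict:
            continue                       # someone else won the race
    return False

srv = FakeAPIServer({"resourceVersion": 1, "finalizers": []})
ok = add_finalizer_with_retry(srv, "example/finalizer")
```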
B
Right, yeah. The problem with updates is, if you have an older version of the resource, then the API server... you just end up zeroing out fields that you didn't want to zero out. And so it's not an error; it's just a loss of data on the object, and I don't even think Kubernetes has a way to protect against that. It's just a bad possibility.
B
Right, right, we're talking about two kinds of versions. There's the etcd version, which is like the generation number of the object, and if you're doing an update versus an older generation of the object, it will fail, just because updates have to be against the most recent generation.
B
But there's a separate problem, an actual versioning problem, where, like, the struct that represents the object got a new field, and so the CRD that the API server knows about has a field that your code doesn't know about, because you're running with an older version of the struct.
B
It will omit the whole field, because your code-gen, you know, is using an older version of the struct, and so the JSON marshaller will just omit the field entirely. And so there could be data that you'll be dropping from the object, because it was a different code version, and that's what...
B
...update is looked down on [for], because update causes that problem, and nothing detects when that happens; it just happens, and then the field is empty now. Gotcha. So people prefer patch because it works around that particular problem: basically, you just don't touch fields that you don't want to touch, right. But yeah, I think you still maybe can't get away from doing a test on the resource version, for complete safety.
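Ben's point, that an update from a client built against an older struct round-trips the whole object and silently drops fields it doesn't declare, while a merge patch leaves unmentioned fields alone, can be shown with a toy round-trip. Field names here are invented:

```python
# What an older client's struct declares (missing "newField"):
OLD_STRUCT_FIELDS = {"name", "finalizers"}

def update_via_old_client(desired: dict) -> dict:
    """An update replaces the stored object with the client's
    serialization, which only carries fields the old struct knows,
    so anything newer is silently dropped."""
    return {k: v for k, v in desired.items() if k in OLD_STRUCT_FIELDS}

def merge_patch(stored: dict, patch: dict) -> dict:
    """A merge patch only touches the fields it names."""
    out = dict(stored)
    out.update(patch)
    return out

stored = {"name": "ba-1", "finalizers": ["f"], "newField": "important"}

# Both writers only intend to clear the finalizer list:
after_update = update_via_old_client({**stored, "finalizers": []})
after_patch = merge_patch(stored, {"finalizers": []})
```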
A
In which case, [even] this function can race.
A
Right, or concurrent updates on the same resource, yeah.
A
Here, so that's why the randomness thing. So, but I have a question: what was the reason, again, for not using updates and preferring patches?
A
Yeah, so yeah, okay: so we addressed that by... actually, you know, in all of the API requests in client-go, in the options you can provide the API version, and it only works with that API version.
B
Right, but the problem is, sometimes people add new fields without bumping the API version. Then, if you're scrupulous about always updating the API version, you're fine; but the problem is, once you reach v1, there's nowhere to go, right? You...
A
Right, makes sense. So, okay, so we're almost out of time. In terms of implementing the finalizer, I think, you know, we'll do an experiment: we'll try the patch method, and we'll see how it works out. But really, the only way to avoid conflicts and reduce the amount of time it takes for this to converge is to space them out, because even with this patch you're going to conflict anyway.
A
Even if you test, and even with the update, you're going to conflict anyway, if, you know, you have multiple updates to the same object.
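"Space them out" is usually done with jittered exponential backoff, so that retries which failed together do not come back together (the thundering herd from the 100-drives example above). A small sketch, seeded only to keep it deterministic here; the function and defaults are illustrative:

```python
import random

def backoff_schedule(attempts: int, base_s: float = 0.1,
                     cap_s: float = 30.0, seed: int = 0) -> list:
    """Full-jitter exponential backoff: each retry waits a uniform
    random time up to an exponentially growing (and capped) ceiling,
    so concurrent retriers spread out instead of colliding."""
    rng = random.Random(seed)   # seeded only so the sketch is repeatable
    delays = []
    for i in range(attempts):
        ceiling = min(cap_s, base_s * (2 ** i))
        delays.append(rng.uniform(0, ceiling))
    return delays

sched = backoff_schedule(8)
```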
B
Right, so for Sing's concern that she raised a minute ago: maybe the right investigation to do is to see why there are concurrent updates happening, or concurrent patches happening, and whether we have the same problem in COSI. Like, first investigate the why, rather than the mechanism for working around it, right? Yeah.
A
And that's all it took, anyway. So we're almost out of time; I think we can end here. So, you know, Ben, or anyone else: if you want to work on this finalizer logic, we need people to start contributing code for stuff like this. Then, since you've already worked on this, would you like to take a stab at it in COSI?
B
The finalizer thing? Why... so I've just realized, after this conversation, that I need to fix my code in the populator thing, yeah. Like, if I get this to work better in the data populator library, there's no reason we couldn't copy-paste it into COSI. I'd be happy to help with that. Okay, but I don't want to sign up for any heavy-lifting...
B
...tasks, because I'm also signing myself up for a whole new feature in 1.22, so I'm going to have three things, three frying pans on the fire, and it's going to be great.
A
Yeah, yeah, I understand; it's okay. But anyone else, if you have some cycles, your contribution is welcome. Just like Janis has been making pull requests: he is not able to join the meetings, because he's in Ireland and it's really late for him, but he still makes contributions.
A
So more people are encouraged to do the same, and yeah, you can reach out to me or just message on the Slack channel, storage-cosi, and someone will help you, or if not, I'll help you get started. That's about it for now. One last thought: if any of you get a chance to reach out to Tim, or if you can help out with reviewing the KEP, please do so.
A
That's all. Let's talk again on Monday.