From YouTube: SIG Cluster Lifecycle - Cluster Addons 20210428
A
Hi friends, welcome to an admittedly late recording of the cluster addons call. Today is some day of the month, apparently April 27th. We made it at our normal time on a Tuesday in US time zones, and we've got a bunch of cool friends here from all around the community to sync up and talk cluster addons specifics. If you're joining us from YouTube or watching the recording later on, know that you're always welcome to hop in the Slack channel and ping us to talk about cool stuff, or jump on this call and talk about cool stuff.

We've got an agenda and not much in it for today. About 40 minutes of unstructured discussion has already occurred, since I joined super late (that's on me), so it's a little bit more of a free-flowing agenda where we usually just take notes. I suppose, since we have no concrete items for today, does anyone else want to take the floor now on what we've been talking about?
B
I can, briefly. If I recall, we talked a little bit about patch versus update. We talked about OCI images and trying to figure out an appropriate format for wrapping bundles of configuration. And I don't recall if we talked about anything else. Does anyone recall anything else?
A
So, regarding the one on patch versus update, I'm just...

B
Curious? We talked about why patch, why update, and whether or not it is true that if you use a typed client with update you're in trouble. We talked about whether, if a field isn't present in your struct, it does or doesn't get dropped.
B
Yes, and we also talked a little bit about how update... sorry, apply is now available: client-side apply is now available as a library, and that same library can also do server-side apply. So that may be a nice way to go.
A
Yeah, certainly. I think the ecosystem has gotten to the point where the managed-fields style or the last-applied-configuration style of apply, either of those, tends to be more safe and useful, even though it's way more complicated than the update verb.
A
Interesting, yeah. I think there are just enough APIs where the structs that you're trying to update are varied and contain information from multiple sources. If you don't own an entire struct, an update tends to be a bit dangerous, since you have to do a two-phase commit.
A
I guess that... okay, cool. Cool discussion on that.
A
And how did we get on that discussion? Elia, are you, like, doing something with add-on management and Go code and an operator? Or... yeah, sort of, yeah? I see, likely for a popular CNI provider.
C
We had an internal discussion about how we're going to implement it, and I thought I'd raise it here, because I kind of liked some information in what you just said, Lee. That's interesting; that's something I was missing, yeah. I guess, if it is something like a Deployment or a DaemonSet, even if you set an owner reference, you're likely to encounter other owners poking things into it that you weren't fully aware of.
A
The field manager stuff from the new server-side apply, and even the old client-side apply with the last-applied-configuration annotation, tends to be, like, the safest form of appending to a map, yeah, or a collection, for those complex objects where that makes sense.
A
I don't know if this is really related, but sometimes controllers have patterns where they copy label updates from parent objects into child objects, and then that has the consequence of, like, changing the label selector of a StatefulSet or something. We've noticed this in the Flux project, like when we add labels to track garbage collection in a performant way on a high-level custom resource, and then those garbage collection labels get copied down to child resources.
A
Cool. So, regarding the OCI image format and how we actually store bundles: there was definitely an effort here to try and come up with a common understanding of how to unpack these things and how to store them. Certainly the Docker registry format, or the Docker image format stored in the registry just as a bunch of tarball layers, has been the solution that people have taken, with the manifest format from the OLM team trying to make that more standard, as well as basically the imgpkg folks from the Carvel team.
A
Doing the exact same thing. Did we... I'm sorry, I missed this discussion.
E
I think we talked about some of the work Nick is planning to do to prototype support for bundle images and various tooling, yeah. I don't think we talked much about the formats themselves. Okay, OCI and... ORAS, really. Storing arbitrary artifacts in OCI: ORAS is the project that standardizes that, and it looks really attractive, but we sort of stayed away because of the runtime concerns.
A
Yeah, certainly ORAS being used directly in Helm is a very attractive reason to look at doing that. I believe a lot of folks over in Azure are also using ORAS for other custom media type assembly, and there are various reasons to do that. I think also the, what is it, the CNAB bundles as well may be ORAS-related. But then you've got, over on, is it the containers org, primarily contributed to by Red Hat folks...
A
You have umoci, I believe, right, umoci and the Buildah ecosystem. And then there are the Google container dev tools folks, like with ko and crane.
A
They have their own layer library as well. And there's a separate library, or no, the Google go-containerregistry library is partially used at a higher level inside of the imgpkg code.
A
I guess I should probably write all that down, but yeah. So there are, like, mainly three libraries, and then some high-level usage that we could learn from inside of imgpkg for unpacking those kinds of things. Also, I learned that there are folks doing Docker containers as Firecracker VMs at fly.io.
A
So, five enumerated ways of working with manifests, excellent. Certainly the pattern there is something that we would be interested in supporting for source-controller from Flux, to do an unpack there, and yeah.
A
I imagine that the most generic unpack available, with some slight options to change behavior explicitly based on the thing that you're pointing at, would be powerful enough for the initial use cases, right? Like, if you know that you're working with a specific type of image format that needs a different unpack than a Docker image, then you can specify that, maybe in the source-controller's OCI image spec. But then perhaps there's a more powerful heuristic behavior, where we could start to pick up on common media types, right, like with Nick's proposal.
A
If there are, you know, kind of two storage mechanisms, one is the Docker image way, and the other has a specific media type and uses tarballs or whatever in a simpler way than just Docker images, and that's indexable with OCI registries that support arbitrary media types, which is basically all of them except for Docker Hub.
A
Then that might be an attractive solution, with an alternative heuristic unpack, and that's something that could work for source-controller and imgpkg and, you know, all of these other ways of doing things.
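A sketch of what that heuristic dispatch might look like (hypothetical code, not from source-controller or imgpkg; the `classify` helper and its return strings are invented, though the media type constants themselves are real): inspect the OCI manifest's config media type and pick an unpack path from it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ociManifest is a minimal slice of the OCI image manifest schema,
// just enough to inspect media types.
type ociManifest struct {
	Config struct {
		MediaType string `json:"mediaType"`
	} `json:"config"`
	Layers []struct {
		MediaType string `json:"mediaType"`
	} `json:"layers"`
}

// classify sketches the heuristic discussed above: look at the config
// media type to decide which unpack path an artifact needs.
func classify(manifestJSON []byte) (string, error) {
	var m ociManifest
	if err := json.Unmarshal(manifestJSON, &m); err != nil {
		return "", err
	}
	switch m.Config.MediaType {
	case "application/vnd.oci.image.config.v1+json",
		"application/vnd.docker.container.image.v1+json":
		return "runnable image: unpack layer tarballs", nil
	case "application/vnd.cncf.helm.config.v1+json":
		return "helm chart: hand off to chart tooling", nil
	default:
		return "custom artifact: driver-specific unpack", nil
	}
}

func main() {
	// A Helm chart pushed to an OCI registry uses a custom config
	// media type rather than the Docker image one.
	manifest := []byte(`{
	  "config": {"mediaType": "application/vnd.cncf.helm.config.v1+json"},
	  "layers": [{"mediaType": "application/vnd.cncf.helm.chart.content.v1.tar+gzip"}]
	}`)
	kind, err := classify(manifest)
	if err != nil {
		panic(err)
	}
	fmt.Println(kind) // helm chart: hand off to chart tooling
}
```

This is exactly the kind of dispatch that works fine in a controller-owned library but that a node's container runtime would never perform, which is the constraint raised below.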
D
Yeah, I guess the only thing that we brought up that we didn't just bring up was whether or not there is a requirement for the node to be the thing that pulls the image as, like, a baseline everywhere, which discounts any sort of custom media type solution, or anything that's non-conforming to what CRI can unpack.
A
But when we start talking about config images, if we have these arbitrary media types and we want that to be unpackable, doing that in CRI implies that you could create a container from that config filesystem. That would be useful for, like, the CSI volume driver, but maybe it's not that useful.
A
For somebody who accidentally puts a config image reference into their Deployment, you would get a really weird error message or something, because there's all of this missing data; you know, that image cannot be created into a container because it doesn't have an entrypoint.
E
Yeah, anything that the kubelet tries to instantiate, I assume it's only going to check for, you know, runnable images, and check those, like, five media types, I guess, that are allowed.
A
Absolutely, yeah. I mean, if we wanted to be able to support those kinds of arbitrary media types for config images, then the thing that unpacks those images, inside of, like, the kubelet's CRI interface and down into the container runtime, and we're talking containerd and CRI-O...
A
You know, any virtual kubelet implementation, all that stuff needs to know how to do this generic unpack, and that is not realistically, or even for sure, going to happen, even if we agree as a group that it should be done that way. And even if we could convince everybody, you know, to have the generic unpack supported, or even convince, like, a few key players, then it would likely take a while for those things to merge and then percolate into actual clusters.
A
Now, certainly, I think that with the source-controller approach in Flux, if we want to be able to support these sorts of arbitrary unpacks and have different drivers, like, we can control that thing, and we can have a shared library, right? So that imgpkg, and potentially the CSI volume driver as an alternative code path, could do these unpacks in a separate process?
A
You know, and there are the concerns about sharing, like, image auth configurations. If you're in cluster, there are good paths for that; if you're on a node, it's more complicated, since the file formats are not, you know, standard, and there are trade-offs there that we've discussed before.
D
I guess my point here is that if we want to be able to support things like Flux unpacking these images on cluster with that requirement, then for the immediate future it discounts doing things with custom media types, and means sticking to the Docker v2.2 format, yeah, you know, OCI or Docker or whatever, for, like, CSI.
A
That kind of gets into what Evan and I were talking about, because I think, like, last week or the other week, we mentioned that there was a lot of overlap in the purpose and design guidelines that Flux has implemented and where RukPak is looking to go. For context: Flux fetches sources.
A
It applies them inside the cluster, and it notifies people about the state of those things. The proposed RukPak solution has a few different guidelines, but generally it has some kind of bundle ref to get a source into the cluster, and then a provisioner, with a provisioner-class-style API that can be extended to apply the bundle, and then there are, like, status updates and things inside of the custom resources. So, similar in scope to what you would want out of Flux.
A
It made me wonder what kinds of things in the RukPak design could be done to allow for interoperability, because Flux is very composable. Perhaps source-controller, since we're talking about supporting OCI bundles and that kind of thing in source-controller, and there are already all of these other artifacts that we support... if the RukPak provisioners and the RukPak bundle API could reference something from source-controller, and the provisioners had a library that knew how to fetch things from source-controller...
A
...then we might have a good way to make these systems come together, in a way that would be good for users of both, and yeah. So that's just kind of what we were digging into the technical details of.
A
Yeah, I'd be happy to move those things that we were just happening to DM on into a thread on the Slack; there's nothing necessarily private about them. Cool, thanks, friends. Sorry that I joined late and we didn't record more of this, but it's always good to catch up with all of you. Yeah, yeah. Take care of yourselves, stay healthy, thanks for watching, see you all in two weeks, if anything. So yeah, got it, bye.