From YouTube: SIG Cluster Lifecycle - Cluster addons 2021-08-20
A: Hello, everybody, and welcome to the bi-weekly cluster add-ons meeting. I am Justin Santa Barbara, your moderator and facilitator for today. Today is July 20th, 2021.
A: We have a very, very light agenda, so we will likely not have a full-length meeting, but please do nonetheless be mindful of our code of conduct, which boils down to being a good person, and remember this meeting will be put on the internet for posterity. We do have an agenda; it is very light, as I mentioned. I placed a link in the chat. Please do feel free to add your names to the attendees list if you would like; that can be helpful for people watching the video afterwards.
A: I actually missed the last meeting, but it looks like we mostly talked about how we did not have uptake on GSoC, Google Summer of Code, and we talked a little bit about OCI support, working with Flux and RukPak. On that note, I think, Nick, you wanted to mention an imminent, as in two weeks, demo and a PR?
B: Make sure that you can hear me. Yeah, okay, good. Oh, "host disabled participant screen sharing"! Well, if we just go to that link that I put in the doc, we can probably follow along.
B: Yeah, so I just wanted to show this up here for people to review. It is a sample provisioner for this thing we've been talking about quite often in this meeting, called RukPak, which is attempting to generalize a lot of the concepts of unpacking something to a cluster and then being able to apply it to that cluster, and to talk about content
like you would an image, like you would pod specs or images for pods. So this project defines sort of an API for those things, and several on-cluster APIs. One of them is the Bundle, which is just this collection of arbitrary external content that you want made available to your cluster.
B: And this PR is an implementation of that API, and hopefully provides some machinery to suck in that content, make it available via some HTTP API, and then create references to the locations where the content is accessible on your cluster.
B: There's a lot going on here, but Tim, who was working on this, hopefully will be able to give a demo of it by the next meeting. So that's about that! Maybe I can go into some of the specifics of what this is going to do.
B: Thank you, yeah, okay. So, basically, since this is talking about arbitrary content, and not just a manifest bundle or an OCI bundle or a Helm chart or repo or whatever, we decided to make the input format for this particular implementation of the API be a Docker container image with a root filesystem that has a set of bundles in a flat directory.
B: Hopefully, by the time it's finished, it will basically fire up jobs, taking advantage of CRI to pull and unpack the images to a persistent volume on the cluster, then serve that content over an HTTP API as a tarball, and then reference the location through a URI on the Bundle resource in its status, so that other things in the cluster can read off of that to grab that content and apply it.
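The flow described above could be sketched as a resource like the following. This is a rough illustration only: the API group, field names, and URL are guesses at the shape being discussed, not the actual API in the PR.

```yaml
# Hypothetical sketch of the Bundle API being described; real field
# names in the RukPak PR may differ.
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-addon
spec:
  # Input format for this provisioner: a container image whose root
  # filesystem holds the bundle content in a flat directory.
  source:
    image:
      ref: quay.io/example/my-addon-bundles:v0.1.0
status:
  # After a job pulls and unpacks the image to a persistent volume,
  # the content is served over HTTP and referenced here, so other
  # controllers can fetch and apply it.
  contentURL: http://bundle-content.rukpak-system.svc/bundles/my-addon.tar.gz
  phase: Unpacked
```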
B: The next step is to codify how that content is applied to the cluster, so the other piece of a provisioner will be applying the content that's served up through this really loose API: it knows how to go to this URI, what kind of content to expect, and how it wants to apply it. So that should be coming up next, but I realize I've been rambling for a little while on this. It's really up here for people's reviews.
B: So, if anybody's interested, there's the link.
A: Otherwise, on that notion: why does it unpack to a PVC rather than, you know, just serving it from the running image, as it were?
B: So, for certain clusters, and this comes from our experience working on specific distributions of Kubernetes, image pulling is restricted to things like CRI.
B: So there could be things happening in that distribution, like the image locations getting swapped out by CRI, and if you pull directly using something like containerd, Docker distribution, or the OCI image packages and machinery, you won't get that image-swapping capability, or whatever other middleware. So the choice of PVC is not super important; it could really be anywhere that is accessible on the cluster.
C: (question inaudible)

B: Yeah, I think that's still a little up in the air. I think there's probably room for this to be configurable. This also adds something called a provisioner class API, which follows sort of the StorageClass pattern, where you can provide arbitrary configuration to the things that implement the behavior, in this case the specific provisioner.
B: So there could be a world where you have a provisioner class that says "use this type of volume, like hostPath," and then, if the provisioner supports that, it'll use that instead. But I think for the short term we want to stick with things that are, at least for this implementation, compatible across all kube distros and all configurations.
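The StorageClass-style configuration described above might look roughly like this. Everything here, from the group and kind to the parameter names, is invented for illustration; no such resource is confirmed by the discussion.

```yaml
# Hypothetical ProvisionerClass, following the StorageClass pattern:
# arbitrary parameters handed to whichever provisioner implements
# the behavior.
apiVersion: core.rukpak.io/v1alpha1
kind: ProvisionerClass
metadata:
  name: hostpath-backed
provisioner: core.rukpak.io/plain
parameters:
  # A provisioner that supports it could use a hostPath volume
  # instead of a PVC; others would ignore or reject this.
  volumeType: hostPath
  hostPath: /var/lib/rukpak
```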
D: I'm wondering if there's a concern about the number of PVCs that a provisioner like this might end up consuming, or whether the PVC is supposed to be shared, you know, across several of these provisioners.
B: Yeah, I think that was one of the things Tim was going to work on next: looking at consolidating the storage of these different bundles. Right now, I think this code is spitting out separate persistent volume claims, and perhaps separate persistent volumes, for each bundle that gets pulled down to the cluster, and he was looking at different ways to consolidate that into maybe a single persistent volume, maybe implemented through,
what is the storage API? I can't remember what it's called, Container Storage Interface, something like that: a persistent volume provisioner or storage class, whatever the nomenclature is, for having an aggregate volume that all the bundles are unpacked to and shared. But I think the simplest thing for now is going to be to just put it into a single persistent volume that only this provisioner is going to use, and expose the interface itself rather than the volume.
D: This is very similar to Flux's source controller architecture, right? Or, yeah, but we're not by default backing anything by a PVC. The source controller simply fetches things to a temp folder that's backed by an emptyDir, and you can change that out yourself with a PVC, which has some immediate constraints. One is that, you know, you're using the same directory structure to partition.
D: It all has to be accessed from the same device, right? So maybe there are limitations in size and IOPS and whatever. If you're talking about lots and lots of configs, that's probably not an issue, but when you're talking about container images, you could potentially be unpacking something that's quite large, even accidentally.
B: Yeah, I think we're running into almost exactly the same problems, because it's such a similar space, and I know we've been talking about how we converge this with Flux, make it useful to Flux, or have Flux integrate pieces of this into it. They're very similar, but the overlap, and what uses what, isn't exactly clear to me.
B: Yeah, so maybe, if you have the chance to look at some of this stuff, you can get your thoughts down on GitHub here, so we can grok it with the rest of the people working on this.
D: Yeah, the use of a URI to retrieve the files over a server is something that we do with source controller and consuming controllers as well. It's communicated via status, and the security consideration there is that not all clusters have a network policy controller, so sometimes it can be difficult to lock that down, since it's a network-accessible endpoint inside the cluster, and you can't count on enforcement of those policies everywhere universally. But that would be a good sign.
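On clusters that do enforce network policy, one way to lock such an endpoint down is an ingress rule scoped to the consuming controllers. The NetworkPolicy API itself is standard Kubernetes, but the namespace, labels, and port below are placeholders, not names from the project under discussion.

```yaml
# Restrict ingress to the content-serving endpoint so only the
# controllers that apply bundles can fetch from it. Only effective
# on clusters with a network policy controller installed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-bundle-content
  namespace: rukpak-system        # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: bundle-content-server  # placeholder label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: bundle-provisioner  # placeholder label
      ports:
        - protocol: TCP
          port: 8080
```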
B: That was really helpful. I think the major difference between this and source controller, though, is that source controller and Flux duck-type their common types: each source is expected to be a separate API sharing a common structure, whereas this is a single API where the behavior is the thing that changes.
C: If we wanted to dig in more to source controller and how we could collaborate, is that best done through Julie, or do you have someone else you think we should talk to? Because, in a way, it feels like RukPak is just an alternative API for the same type of thing that source controller is doing. It's following the provisioner model instead of, like Nick just said, the duck-type model.
C: So I don't know if there's opportunity there, or if that's kind of at odds with what the source controller maintainers think about things. I'm just curious if you have a jumping-off point.
D: Yeah, I think having a conversation with some of the other Flux maintainers would be interesting. I can probably do more communication of these public conversations in the CNCF #flux channel and make sure that everyone sees it. We have the public Flux meeting as well. Certainly, you know, I can invite some of those people here to talk about it.
C: Oh yeah, it seems like there could be a cool best-case scenario here. The main thing we care about right now is bundle support, so we could add a bundle type to source controller and then use source controller to back all the provisioners and still get this API. Then, if you're using this, you'd have the choice: do you use duck-typed APIs, or do you use a single Bundle API?
D: I could see many different types of provisioners exposing sources via this URI or filesystem format, more than just OCI: buckets, repos, Helm charts.
C: Yeah, I mean, for the initial one, since this is part of Operator Framework, we're looking at various iterations of storing manifests and bundles. That includes some OLM-specific ones, but also more generic ones, like just manifests, and absolutely things like Helm. But then, you know, it kind of begs the question: if this is just exposing content,
why wouldn't you want, you know, a git backend or anything like that? And at that point it's like: should we just pull in source controller for that and make these things work nicely together?
D: Yeah, because, I mean, source controller is very useful independently; you don't need the rest of Flux, and we built Flux so that it could be used and adopted and extended upon in that toolkit kind of way. So that would really be an ideal outcome, and I know that Michael Bridgen in particular, who's the original creator of Flux, is very keen on OCI bundle and unpack support for source controller.
D: Well, we don't have an existing design yet, but, you know, we'd like to make these things work, and this is going to be a focus for me. I'm looking at a job change in two weeks, where I will likely have more sponsorship to work exactly on this problem.
C: Oh, cool. Well, yeah, we should definitely keep these conversations going, but, like I said, I don't know the direction of the source controller project. Another option would just be, if we like this model versus the duck typing... I don't know. I hesitate to suggest something like that, because source controller has definitely planted a flag in the ground with how they want the APIs to be.
C: This model has some value in that the creators and consumers of it don't need to know the types of the artifacts. That's kind of why it's interesting to us from Operator Framework, because we'll have collections of artifacts that we want to instantiate together, and we don't want the thing creating them to have to care what the artifact type is.
D: Let's see. So, because the thing that's consuming it should be relatively generic, you don't want people to have to specify a kind or anything?
D: So you're able to just say, "hey, give me a git repo at this key," right, or "with this name from this namespace," or "give me a bucket from here." But you're saying: have a generic, single API kind that is able to represent those things, backed by something that is responsible for producing that thing, and then you can organize yourself by your labels or that sort of thing. That's more attractive.
C: Yeah, pretty much. I mean, I don't know if it's valuable to do this here, but the reason for that is: we have collections of artifacts in Operator Framework called indexes, or catalogs, and we have a dependency resolver that's, you know, pretty generic and doesn't really know much about what it's picking out from the catalogs to install based on user requirements. And so we have requests here and there for things like, yeah,
"I can install operators this way, but when I install this operator, I want it to pull in this Helm chart and stand it up," or "I want to have some pre-configured set of manifests that consume APIs that operators provide, but then have it depend on those operators and pull them in if they're missing," that kind of thing.
C: So, from the perspective of the resolver, it just wants to pick up a set of things and say, "these are the things that satisfy the user's requests," hand them to this other API, and say, "okay, make these things exist the way that they need to, based on what type of artifact they happen to be."
D: Certainly, I could see enabling a flag in source controller that produces owned objects, child objects that have an owner reference to a GitRepository or a HelmChart or something, and that simply store things in the same exact form, in a uniform format. That would be a way to have the controller reflect that information, you know, in a readable way. It sounds like the requirement to use a single API is driven not really by the authorship requirement, which makes sense, because different types of sources need to be authored in different ways; they have different information to make them work. It's just that, from a reading standpoint, you need a client that wants to consume them in some uniform way.
D: As long as you have the GVK, namespace, and name information, you can dynamically duck-type yourself against those things. But if you want a single API group and kind that can list things in a generic manner, you could have all sorts of dynamic controllers producing and reading their configuration from formal types specific to their thing, but reflecting that information into a standard object as a child resource.
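That idea, a formal per-source type whose controller reflects into one standard child object, could be sketched as follows. Both API groups, both kinds, and all fields here are invented for illustration; the real Flux GitRepository API differs.

```yaml
# A formal, source-specific type that authors write...
apiVersion: source.example.io/v1alpha1
kind: GitRepository
metadata:
  name: app-config
  namespace: default
spec:
  url: https://github.com/example/app-config
  ref:
    branch: main
---
# ...and the uniform child object its controller could produce, with
# an owner reference back to the source. A resolver or any generic
# client needs to list only this one kind.
apiVersion: artifacts.example.io/v1alpha1
kind: Artifact
metadata:
  name: app-config
  namespace: default
  ownerReferences:
    - apiVersion: source.example.io/v1alpha1
      kind: GitRepository
      name: app-config
      uid: 00000000-0000-0000-0000-000000000000  # placeholder
status:
  url: http://source-controller.flux-system.svc/gitrepository/default/app-config/abc123.tar.gz
  revision: main/abc123
```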
D: I'm not really very convinced that a dependency resolver has to use the same kind for different sources, as long as there is an interface that's implemented in the status field, because our client tooling is pretty good in the Kubernetes world. But I'd be interested to hear about the constraints, or why that was a thing. And certainly it's possible to, you know, create a child resource.
C: Yeah, I don't know that it's a strict requirement; it's more about the way we see extension happening, right? If we want to teach the resolver about new kinds that it needs to write out, now the resolver needs to be Kubernetes-aware and understand some configuration somewhere about what kinds are available and how they map to artifacts it might know about, versus the other direction.
C: And it's not that it doesn't work to have kinds for your volumes, right? It worked that way for a long time, and it just makes it easier to extend if the Pod API doesn't have to worry about all the different volume types that exist.
D: Yeah, that does make sense. I think it's a matter of whatever implements the Pod API, right: the client tooling for that, whatever implements the controller.
D: There are a couple of strategies for collecting and watching that information, and having a single API group can help, because then you don't have to set up dynamic watches for things. I think that's the main constraint there: if you want to write a controller on that, say a dependency resolver that's constantly re-evaluating information in the cluster so you get reactive results or alerting or something, then, yeah, watching a single API group seems like what you would want.
C: Well, cool. If it makes sense to you, it sounds like we should talk more about this.
D: Yeah, I'm also interested, actually, in this dependencies thing in relation to sources, because in Flux we can talk about composition of sources, right? We have ways to build more complex sources from many different ones, but the dependency portion of it is done at the apply and object-management layer. The Kustomizations and HelmReleases implement dependencies, because they know about the health of things and the state of whether or not something was applied or pruned, and that sort of thing.
C: Are you talking about, like, runtime dependencies? The Helm chart one made me think you're talking about, you know, checking on the health of another component, kind of a thing.
D: Yeah, I guess the verbiage is getting a little conflated. Maybe we made a poor choice in calling it "dependsOn." Yeah, it's a runtime-style dependency, where you predicate that something should be installed after something else. But from a source-management perspective,
yeah, people use the word "dependency" as well, and we just call that composition, I think, in Flux: when you're taking multiple things and matching them together before you apply them. Basically, if I want to include a patch directory from this repo and put it onto some upstream repository, you can compose those things.
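The runtime-style ordering described above is expressed in Flux through the Kustomization's dependsOn field; a minimal sketch might look like this. The resource names are examples, and the exact API version may differ from what a given Flux release uses.

```yaml
# Flux runtime-style ordering: this Kustomization is only applied
# after the one it depends on is reconciled and healthy.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  path: ./app
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-config           # example source name
  dependsOn:
    - name: cert-manager       # applied only after this is ready
```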
C: Yeah, I think those are interesting problems, but for the most part we're trying to delegate those as much as possible. So the dependencies on the OLM side are very much like traditional package-manager dependencies: I have an operator, it's going to use cert-manager, so if I install this operator I need to also install cert-manager, and then, once they come up, it's up to them, more or less, to check that they have what they need and report good statuses if they don't.

D: I see, yeah.
D: And I can see why you would want to be able to talk about packages in a way that's removed from their transport.
D: In some ways, when a package manager like apt only uses a particular kind of transport, it makes things really uniform, but it adds restrictions. It would be interesting to write an OCI backend for apt, but that's not a thing, right?
C: If you just couple that with a resolver that knows about different artifact types, like an ORAS-aware resolver, you can store and distribute pretty much anything; you could do Python packages or Java stuff. I think that's an interesting problem, but kind of out of scope for right now. We're trying to focus on the cluster needs.
D: Yeah, I guess I was just bringing it up because I can see the appeal of why you want the API to be duck-typed and extensible in that way, because when you take other package managers that standardize on a specific type of transport, it becomes hard-coded into the way that the package manager works.
D: So you don't end up with a generic transport mechanism, even though they unpack to a package format that's represented by something very generic.
D: But people store Kubernetes config in so many ways right now, and there is no standard, and there are competing ecosystems. So you want to be able to duck-type the transport, essentially, but then index them in the same way, and that makes sense.
D: Luckily, the meeting is recorded, so we can just transcribe it, right?
D: I will reach out to the Flux team to make sure that we can chat a little bit about how there's overlap in the design, and even if, you know, you build your own machinery for the RukPak APIs, there might be some benefit in source controller also being able to be adapted to do the same thing. So there are multiple ways to create a working system with these shared interfaces.
A: That sounds great. Yeah, collaboration is always the goal here, so that's wonderful.
A: All right, well, on that note, thank you all for participating. See you all in two, eh, two weeks. Hopefully we'll have that demo from the RukPak folks. I put a TBD there, so please just fill in the names of the people if that is confirmed; that would be wonderful. Otherwise, see everyone in two weeks. Have a wonderful couple of weeks.