From YouTube: SIG Cluster Lifecycle - Cluster Addons 20210413
A: All right, so we're recording. Hi friends, welcome to the cluster add-ons meeting. It is April 13th, our normal Tuesday time slot, and we've got a bunch of cool friends joining us from various companies. As always, a couple of things on the agenda.

A: Anybody else have anything they wanted to speak up on before Evan gets into it? Oh, we had an action item from last week. Justin, I guess you said you were going to follow up on the project submission to CNCF, and then Evan, I think you ended up doing some of that, for the GSoC stuff, right?
B: Yep, everything's submitted and accepted. I still have a tab open to read through all the mentorship docs, but I think we're good for now, and then sometime in the next couple of weeks the student selection happens.
A: Yeah, I would imagine we're still quite ahead of schedule on that, or at least we're ahead of the ball and not trailing on deadlines and that kind of thing. Cool. Well, yeah, I DM'd you about that; I'm happy to help out with any of the mentoring stuff.
A: If you need a formal co-mentor, please just ask.
B: Yeah, I would appreciate that, because this is my first time — I know you've done it before, and I have not, so I'll be looking for help there. I was also thinking we might want to reach out to SIG Node and see if someone there can help with this project, because it really touches a lot of node issues.
B: I think we can just update the PR — I'm happy to do that — or, I guess, the document.
A: Yeah, I suppose, since you have a branch open for it already, if you just want to add my name, I'll ack it. Sounds good, cool. And then how shall we reach out to the SIG Node folks?
A: Should we just drop the proposal in their channel and ask them to say hi?
B: That could be a good place to start, or we can look up their meeting schedule and see if there's a meeting we can drop in on, and see what they have to say.
A: I might put in an action item to make sure we don't lose that. Sweet, that's pretty good! That's it on GSoC stuff. Justin, I see you have preempted our agenda for an update on kOps, yeah?
C: I just wanted to slot in a 30-second update — like, I will take 30 seconds, and the timer is starting now. Good. We have got tentative agreement to merge cluster add-ons behind a feature flag into our next version of kOps. So in the next couple of weeks we should be cutting that branch, then it will go into the master branch, and we should actually start to see progress there. It's been a long wait, and thank you to everyone.
A: Super cool, all right — and also a very succinct update. I think you've still got another 15 seconds if you'd like to keep talking. Yeah, cool, sweet. Well, that's GSoC, and awesome news from the kOps side of things.
A: Hopefully the OCI packaging stuff, which we'll also be keeping a focus on in the coming months, will continue to benefit the effort on the kOps side as well. Sweet. I guess, on that topic — Evan, did you want to talk a little bit about RukPak? I was looking at some of the design docs last week, but feel free to take the floor; I think we have plenty of time right now.
A: I imagine that would be super helpful for other people to get context.
B: Perfect. I don't remember how much we talked about this last time, but I'll just assume we all don't know anything about it. This is sort of an offshoot of work that we've done in Operator Framework. When we were looking at where we want to take Operator Framework in general, and OLM in particular, one of the things we wanted to do was find some of the common issues we've had in the past and try to isolate those into something that addresses those needs.
B: So RukPak is about getting content onto clusters, and packaging content — a lot of the same things we've talked about with, you know, transmitting manifests and OCI images. It's the same space, but sort of separated from a lot of the other things that Operator Framework is normally concerned with. So that's why it could potentially be something we're interested in, in this context.
B: One of the things we wanted to do — we're basically talking about writing a new controller with a new API to serve some of these needs — was to make it really unsurprising for people that use Kubernetes. So we're trying to take inspiration from several other things, like pods and images, persistent volumes, and CSRs — specifically, certain aspects of them. For pods and images: an image is a reference to an external runnable image, and then a pod says how that image should be instantiated on the cluster and run as a pod.
B: They reference a storage class by name — we're just looking at patterns that we want to reuse. A PersistentVolume references a StorageClass by name; the StorageClass corresponds to some provisioner that's been installed in the cluster, and configures it; and any PersistentVolume or PersistentVolumeClaim that references that StorageClass is managed by the provisioner that owns it. I have an example here: this provisioner is the unique ID for the AWS EBS provisioner.
B: It manages any PersistentVolume or PersistentVolumeClaim that references a StorageClass naming this provisioner, and the parameters that go into a StorageClass are specific to the provisioner. We're going to try to reuse some of these concepts for RukPak as well. And then the final thing we want to try to reuse is the CSR approval process, which is relatively straightforward.
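For reference, the StorageClass pattern described here looks like this in stock Kubernetes (standard APIs; the names `gp3` and `data` are just illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com   # unique ID of the AWS EBS CSI provisioner
parameters:                    # provisioner-specific parameters
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: gp3        # references the StorageClass by name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```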
B: You create a certificate signing request, and if it has the approved condition, the CSR controller will output the result of the CSR somewhere for you to use. The thing that's nice about this — and the thing we like for RukPak — is that you can either have controllers that automatically approve things based on some criteria, or you can have a user go in with `kubectl certificate approve` and manually approve it. That's something you get to decide via, you know, policy and the way you configure your cluster.
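The CSR approval flow being borrowed here looks like this in stock Kubernetes (standard API; the request payload is elided):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-cert
spec:
  request: <base64-encoded PKCS#10 CSR>
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
status:
  conditions:
  - type: Approved       # set either by an auto-approving controller,
    status: "True"       # or manually via `kubectl certificate approve my-cert`
```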
B: So those are the three big things we're trying to take advantage of. RukPak is trying to generalize the idea of pod and image to arbitrary artifacts. We're no longer talking about runnable images, we're talking about content; and we're no longer talking about running a pod, we're talking about making content quote-unquote "active", whatever that means. And because we don't always know what it means to make a thing active, or what it means to unpack a non-runnable image, we want this to be pluggable, in the same way that storage classes are pluggable for persistent volumes. This is maybe more driven by some Operator Framework needs, but we know it's useful to have gating around what can happen in the cluster, so making it policy-driven and automatable, in the same way that CSRs are, would be really valuable as well.
B: So here's a reference to an operator bundle. There would be some operator provisioner in the cluster that understands what OLM bundles look like and how to unpack them, and the Bundle API just makes the contents available. This corresponds to the non-existent Image API in Kubernetes.
B: There's no real need for one there, because there's only one way you unpack a runnable image: you make the file system available to whatever your runtime is. But that's not the case for our bundles, because we don't know how to unpack all bundles the same way. So the idea is that the provisioner pulls this content and makes it available — whatever that means. In this case we're saying, you know, it'll list out some — maybe this is an aggregated API server and I can query it for sub-objects.
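To make the shape concrete, a hypothetical Bundle object in the spirit of what's being described might look like this (purely a sketch — the speaker notes below that none of this is implemented, and every field name here is illustrative):

```yaml
# Hypothetical sketch only -- this is a design idea, not an implemented API.
apiVersion: rukpak.example/v1alpha1
kind: Bundle
metadata:
  name: my-operator.v1.2.0
spec:
  class: olm-bundle          # which provisioner understands this content
  source:
    image: quay.io/example/my-operator-bundle:v1.2.0  # non-runnable OCI image
status:
  phase: Unpacked            # the provisioner pulled the content and
                             # made it available, whatever that means
```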
A: I see — so whatever is implementing the… something class, the bundle class.
B: Yeah, that's fair — cool, thanks. Yeah, that's a good point too: this could just be a ConfigMap, and you don't need a volume. Good feedback. I guess I should also say: none of this is implemented. This is just a design, so if anyone on the call is interested in collaborating on the design and the implementation, we're totally open to that — sounds like you have good feedback already. So the next thing up is the instance, which is sort of our pod idea.
B: So this says: the contents are on the cluster now — make them quote-unquote "active". Again, this is provisioner-specific. For OLM and Operator Framework, we have an idea of what that means: basically taking the manifests that are in this bundle and applying them to the cluster.
B: The provisioner has the context it needs to do any sort of pivoting like that — if it's required for the type of bundles you're using; it might not be. So this lets you find old and new versions of the bundles; you can think of it almost like a Deployment managing ReplicaSets, or something like that. And then all it needs is a reference to the new bundle, because that will point to a bundle object on the cluster.

B: But then, if you're just interacting with an instance — much like you can create a pod without creating an image object, because there isn't one — you can also just give the spec of a bundle, and this will create the bundle, give it a name, and make it active via the Instance API, according to the provisioner and the provisioner class. You can see the class is here for the bundle.
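A hypothetical Instance object along these lines (again purely illustrative — nothing here is an implemented API, and the field names are made up):

```yaml
# Hypothetical sketch -- design idea only.
apiVersion: rukpak.example/v1alpha1
kind: Instance
metadata:
  name: my-operator
spec:
  class: olm-bundle          # provisioner class that makes the content "active"
  bundle:                    # embedded bundle spec, so a single API call
    source:                  # creates both the Bundle and the Instance
      image: quay.io/example/my-operator-bundle:v1.2.0
status:
  conditions:
  - type: Approved           # gate: an approval controller or an admin must
    status: "True"           # stamp this before the provisioner acts, CSR-style
```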
A: Okay — how come there's a bundle spec in the instance object?
B: It's purely so that you can do it all in one step — I can do a single API transaction that creates the instance. You could get rid of this entirely and you'd have all the same semantics; it would just mean that now, if I wanted to create an instance of the thing, I'd have to first create a bundle and then create an instance. We think that's probably a common enough thing — much like it's valuable to embed a pod spec.
A: I think the instances of pod specs being embedded in an API are usually when it's used as a template to create multiple pods.
A: We did copy that pattern for the helm-controller inside of Flux, where it's only producing a single chart — a chart template — so that's very similar to this, but it's also super confusing.
A: Yeah, it could go a couple of ways. If you want more feedback on just that specific thing — all right, we were having problems with it in helm-controller.
B: Yeah, I mean, I think if provisioners work the way we're sort of imagining, then provisioners would by default not do anything with an instance unless it has the approved condition. Then, if you want something to be approved automatically, you'd just write a controller that approves all instances, or approves all instances with a particular class — or you could ship an approval plug-in with your provisioner. And then, you know, the provisioner class that you're using defines what the approval policy is. All that.
A: Yeah, and you could actually write a controller that does that via labels or something, but it's a little hard to imagine what the security ramifications of it are. Interesting. Cool, okay, I'm still following.
B: That's a good sign — we're not way off base.
B: I don't know what I'm trying to say — provisioners, I'll just keep going: they're controllers that read and write the Bundle and Instance. You can configure them just like StorageClass provisioners: they have some, you know, provisioner-specific configuration language that they understand. We expect this to be pretty limited for most of them. Mostly you'd probably just have a provisioner for a certain class of bundles, and then maybe there's a couple of flags you can set for your provisioner classes — just like StorageClass.
B: One good example would be if your provisioner ships with some approval-policy stuff by default. One of the ones we've talked about for Operator Framework is: if the next version of a bundle doesn't contain higher permissions than the previous version, then it's approved — you only have to review it if the permissions are expanding in scope. That might be just a flag on the provisioner class, and you say: by default, all my bundles are approved in this mode.
B: But if you're running on a cluster with very restrictive permissions, or very loose permissions, you could change that setting. Are instance parameters a thing? They could be. I'm not sure we've seen a good use case for them that wouldn't be covered by provisioner class parameters.
B: Or are you saying — maybe — what do you mean by instance parameters?
A: Parameters — maybe that's a better way to put it. What I'm thinking is: say a provisioner class is something like "kustomize apply". Then I could see provisioner class parameters for kustomize apply being something like a load restrictor, or enabling these Kustomize plugins.
A: But then the instance parameters would be something like: here is my pod identity label for this particular add-on or this particular bundle, or I want to turn the ingress on, or something like that — options for that particular bundle.
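The two layers being contrasted could be sketched like this (hypothetical only — neither the types nor the parameter names exist; they just illustrate class-level versus instance-level options):

```yaml
# Hypothetical sketch of the two parameter layers being discussed.
apiVersion: rukpak.example/v1alpha1
kind: ProvisionerClass
metadata:
  name: kustomize-apply
provisioner: rukpak.example/kustomize
parameters:                  # class-level: how the provisioner itself behaves
  loadRestrictor: RootOnly
  enablePlugins: "false"
---
apiVersion: rukpak.example/v1alpha1
kind: Instance
metadata:
  name: my-addon
spec:
  class: kustomize-apply
  parameters:                # instance-level: options for this one bundle
    podIdentityLabel: my-addon-sa
    ingressEnabled: "true"
```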
B: I don't know — we've talked a ton about those use cases. I think they're interesting, but one of the ways I thought we could deal with some of that is that the refs in the bundle API are a list. So especially for Kustomize-type things, where you think about multiple sets of data, sometimes you could just have — again — a ref to some local ConfigMap that has some of your settings, in addition to your base or whatever. Yeah, cool.
A: Yeah, that makes sense — that works very well for something like Kustomize overlays. I guess if, say, you had a Helm-install provisioner class, then there would be options for the provisioner class, and then for the Helm bundle release you could ref, like, a values file from a ConfigMap or something, and that could be your indirection. Cool.
B: Let's see — I think I added a couple of extra slides. I don't know that this is that important to go over; it's just some of the details of, you know, the security implications of having these APIs that can do this sort of thing in your cluster. So, for example, if you edit the spec of a bundle — or sorry, if you edit the spec of an instance — then probably its approval goes away.
B: Things like that. And you might want some restrictions around who can use which provisioner classes, and things like that. But I don't know that all of that is overly critical for the general overview.
A: I would urge you, if you're thinking about the provisioner class, to consider this use case of building APIs that expand arbitrary bundles of other API groups.
A: It has a peculiarity with RBAC, where the thing that provisions the bundle should have some precise identity, and that identity should usually be either namespace-scoped or bundle-scoped. Does that make sense, Evan? You want to be able—
B: Versus basically having a kube-admin service account that can do all of this — is that what you're saying?
A: I mean, I would imagine that if you're interested in things like approval flows and that kind of stuff, an administrator account — or the ability to say that this account can use a particular provisioner class — is not enough. The class needs to have some specific set of Roles or ClusterRoles bound to it, and that's usually based off of the workload that you're applying, not the way that you unpack the workload — or the bundle, in this case. So, like—
A: If you're installing a bundle from, say, some slightly upstream location, and you have a policy where you constantly want to apply that thing, you usually want the role to be kind of specific.
A: Specific to that thing — like, I only want this to modify this type of custom resource, and Services. That way you can safely accept updates from that location, knowing that it's not going to create pods in your cluster. And that's not based off of the provisioner — "I'm using Kustomize" or "I'm using Helm" or "I'm using ksonnet" or whatever — it's based off of the fact that that workload comes from that place, so you assign the identity and the role binding accordingly.
A: Yeah, the builds should be separated — in status, and maybe even in the API — from the concept of applying those things to the cluster. Whether you do that inside the provisioner class or not, there should be a separation of the two ideas.
A: It's really about as succinct as you can be with the detail that's necessary. You could probably review it in like 45 minutes.
B: We used something similar in OLM — I guess it's opt-in, but you can pick a service account that all of the OLM operations run under for a given set of namespaces, though without using the impersonation APIs, which I think is more or less what you're saying we should be doing here.
B: You know, like: none of my — I don't know — Helm charts are allowed to create CRDs, or something like that. And then you might want something more specific for, like you said, namespaces, or for specific users, or something like that.
A: Yeah, I think that's usually the job of pre-assigned Roles and ClusterRoles. The ClusterRoles in a cluster are used to provide bucketed default rules, and then RoleBindings are very succinct: this is the principal—
A: —this is the set of Roles or ClusterRoles that they get, and whether it's at the cluster level or at the namespace level. So you can role-bind within a namespace against the ClusterRole `admin` and give whatever is in that namespace access to a reasonable set of resources. And then, if you give them `cluster-admin` instead, inside of that namespace, they can delete their own namespace — which is weird verbiage, but that's how it works.
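The role-binding shape being described is standard Kubernetes RBAC (the namespace and subject names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-admin
  namespace: team-a          # the binding is scoped to this namespace...
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole          # ...even though it references a ClusterRole,
  name: admin                # so the built-in "admin" role only grants
                             # powers inside team-a
```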
A: So, like with Flux, we were looking at adding cluster roles for things like source-admin or helm-admin — that kind of thing — that you could just precisely role-bind against without knowing all of the inner details. A lot of times, when people see RBAC—
A: But yeah, as soon as you get into defaulting: a default SA makes sense if what that is, is user-configurable.
A: But if it's not granularly configurable, then you might as well not be super concerned about per-object policy and all that stuff, because that just means all of the objects will reconcile with the same administrator permissions, which taints the security model of the cluster pretty badly.
B: Yeah, that's true for sure. I guess one of the things that maybe is not as relevant for RukPak in general, but is relevant for operators, is that they often need small escalations of privilege to install, and then you don't want people to continue using that privilege. That was some of the motivation for the approval workflows: normally, you know, you want a user to be able to say, "hey—
B: —I want to install this," but whether or not it can be approved to be installed is not necessarily their decision. That's kind of where that came from: if it's not their decision, then someone else has to come and stamp it for approval, and that might be an automated process, or it might be an administrator who just adds the flag to it.
A: And Nick mentions in the chat that there's a difference between the permissions that are necessary for installation of software versus the permissions that the software or content runs with for its lifetime — which is certainly true. But, depending on where the software comes from or what type of software it is, you would probably want different installation permissions themselves, which is what the provisioner class is. Cool, sounds good. Sorry, I don't want to slow you down too much, and I know we have other things to get to, but—
B: Yeah, no, that's pretty much it — I didn't mean to take up the whole time. There are more pieces to this; it's something we've been talking about in Operator Framework for a while, and there are other sorts of layers, but this is the first one, and the one I thought would be most relevant to this crowd.
B: So if you're interested in any of the other stuff, you can come to the Operator Framework meetings, and then if anything there comes up that might be valuable here too, we can definitely bring that back. But I think this is the first thing.
D: I just wanted to add real quick: I think we're kind of positioning this as not Operator Framework specific, but as a separate project that Operator Framework can use. I don't know if we mentioned that earlier, but I think that distinction is kind of important.
A: Certainly, it seems like a generic mechanism. I think there's a lot of value in what you've proposed, and the API shape is quite different from things like Argo and Flux. I'd encourage you, if you haven't looked at those projects, to at least examine what lessons can be learned from those APIs. But yeah, the goal of Flux is to be a generically extensible project that can be integrated into many third-party systems, and there are a lot of components there that do what you're looking for.
A: So I'd be curious what kind of role Flux could play if we, you know, had good integration with these OCI bundles. I'm very keenly interested in getting good interop between all of the artifacts and that kind of stuff — knowing how to unpack them, and providing extensibility for when people want to do fun things with their artifacts that aren't supported in Flux. All of those things are super interesting.
A: So there are a lot of bits in here where, at the least, we'll be happy to help you learn the lessons from there — but maybe consider using source-controller, or some of the other apply controllers, to do that sort of stuff.
B: Yeah, I think we can definitely take a closer look at Flux. I guess, having not used it personally, my impression was that it was serving different needs, but if you're saying it's very much in the same space, we can definitely take a closer look at it.
A: Yeah. What we've done in the Flux project — refactoring into Flux 2, which we're working on making generally available; we already recommend people just use Flux 2, not Flux 1 — is that the ideas have been decoupled into primitive APIs.
A: So if you only want to pull a source into a cluster, you can do that: you can say, pull a bucket, or a Helm repository, or a Git repo into a cluster-internal cache, and, you know, ignore things from it and make it available inside the cluster. That's the sources APIs. And then separately, if you want to reference one of those sources and reconcile some of the things from inside them into the cluster's API server, that's a separate API object.
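The Flux 2 split being described looks roughly like this (standard Flux v2 APIs as of this period — versions may differ by release, and the repo URL is just a placeholder):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository            # source API: pull content into a cluster-internal cache
metadata:
  name: my-repo
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/manifests   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization            # separate API: reconcile that content into the cluster
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo
```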
A: And decoupling those things has made Flux much more modular. We have an SDK guide — it's all controller-runtime based — so I could see these RukPak use cases fitting in to, or extending, the Flux APIs.
A: That's kind of my main bullet point here: how should this stuff work, you know, if we were to propose and implement it in Flux?
B: Yeah, I think I just need to take a closer look. For context, from the Operator Framework side, we would want to take a lot of what OLM does today and just have it be a provisioner for bundles — which happen to be the OLM-specific bundles we support today — and that includes a lot of really specific logic around, like, when CRD updates are allowed.
A: Yeah, I think if we were to compose that with something like the release gates, then you could get that sort of behavior. But yeah, at a minimum there are things in the API that could probably help influence some of the decisions.
A: I think some of the inspiration from the core API types is something we've really looked to when modeling the Flux APIs, and it looks like you're trying to take that approach as well with the RukPak stuff, so there's really good crossover.
A: But yeah — I'm keen to see how the provisioner bits in RukPak can be made more generic. Perhaps we can take some of those design decisions and also implement them in Flux; maybe the provisioners could even be cross-compatible. But yeah, for bundle stuff, there are all these OCI formats: there's the Helm tarballs inside of the OCI charts—
A: We've got CNAB, we've got the imgpkg stuff from the Carvel team, and then people are just pushing all kinds of stuff to OCI registries now — like the Tinkerbell OSS project from Packet is suddenly pushing disk images, like ext4 file systems, up to OCI. It's just experimental right now, so lots of weird things happening there. But you have these blobs inside of the OCI descriptors, and they have media types, and I was figuring—
A: Is there a useful generic way, potentially, for us to unpack these images onto a file system? This basically gets into what you're talking about, Evan, with having a provisioner that's fairly generic. I'm talking about the source-controller part — not necessarily the part of the provisioner that applies resources to the cluster and picks paths and builds things — but just the idea of: what's in the bundle or the artifact, and how do you represent it on the file system?
A: To begin with, the fetching and unpacking part seems to be a peculiar problem when we have all of these proposed formats. Any thoughts, folks?
B: I mean, definitely my feeling is that long term we would like to get kubelet support for pulling OCI artifacts — mostly just because of the cluster configuration challenges you have without it. Without that, all of your non-runnable images will be second-class citizens in a cluster. If you don't go that route, you'll have separate config: you might be talking to the same exact registry, or the same exact namespace in a registry, but you'll have to have separate configuration for it.

A: Which configurations are you talking about — auth, or—?
B: It could be auth, it could be proxies, it could be, you know, custom certs for the private registry you're talking to, which you have to configure once on all your nodes so the kubelet can pull the images. Or mirror configuration — that's another one, if you have configured different mirrors for—
A: Some of those infrastructure challenges can be a little bit problematic, because they break the namespacing model of pull secrets.
A: Pull secrets basically allow you to have a Docker config JSON, and anything that doesn't really fit into that object kind of has to be done at the infrastructure level. But as soon as you start modifying the way that the kubelet talks to the registry, that's for the whole cluster — or for any pods that land on that node.
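The pull-secret object being referred to is the standard Kubernetes type (names here are illustrative; the credential payload is elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds
  namespace: team-a          # pull secrets are namespaced...
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config JSON>
  # ...but only auth fits in here. Mirrors, proxies, and custom certs
  # don't fit this object and have to be configured per node instead.
```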
A: Rather, it's usually done for every kubelet in the cluster. I guess you could have different node groups talk to different registries, or have different authentication credentials, but yeah — if you want to prevent some other namespace from, you know, reading your images, that's just not doable at the kubelet level per se, without other abstractions like isolating nodes to a particular namespace.
B: Yeah, but I guess if the kubelet understood OCI artifacts, then you could just use, quote-unquote, "regular" pull secrets for your artifacts too, right? And then the kubelet would union whatever the pull-secret creds are with whatever the node creds are, and it gets to make decisions around what mirrors it knows about based on its configuration.
A: That's interesting. I don't know if I'm convinced that the kubelet is the right place to store that configuration. I do agree that there's a challenge in centrally representing the config in a way that the kubelet can use; I'm just not sure that, because the config is canonically configured on the kubelet right now, it's the right place for it — I guess that's what I'm saying.
B: Yeah, I think that's totally fair. I didn't mean to be that prescriptive with the solution, but yeah — the problem is the configuration sharing, and if there's a solution there, yeah.
A: Yeah, that's a good issue to think about, because having to configure every kubelet on the cluster just so you can talk to your config registry — when that could be done on a single controller, or with a couple of API objects in the cluster — seems like a really high cost to pay.
A: Like, I can't just walk up to a cluster now and say, "okay, I want to do registry ops" — now I have to go configure the cluster's kubelets to be integrated with my config registry, as opposed to doing it on the one workload that fetches the things.
B: I guess I'm saying you should be able to do both — that's the problem. Right now, say we build a controller that pulls OCI artifacts and applies them to a cluster: you'll have to hand that thing some credentials in a Secret or something like that.
B: But if your cluster is being run by infrastructure owners who have configured a policy that everything has to go through a central proxy, or that all Docker registry traffic needs to be, you know, talking to some particular registry with a particular key and a particular cert — then now you also have to configure your OCI puller with certs that maybe you, as a cluster consumer, don't even have access to, because it's been abstracted away from you in the infrastructure. Yeah.
B: That's the thing that's tricky for us.
A: I imagine at that point you could configure those same things on whatever component — say, source-controller in Flux — on the source-controller pod, by doing things like host volume mounts for the certs.
B: But all of that configuration is CRI-specific, right? There's no standard for it. I think practically there's two standards: there's the Docker-y one and the Red Hat-y one, I guess.
B: I mean, I can't remember if Docker and containerd are the same config now — they might be; that'd be nice.
A: We kind of have this same problem right now with webhooks from registries — they don't all look the same — and that's something the Microsoft folks brought up.
A: Just all of these individual implementations — the stuff that happens outside of the standard. We still need some standard way of having interop for all of that config. Like, events for image push are completely non-standard across all of the image registries, and we almost have that same problem for the configuration of mirrors and proxies.
B: We already have these standard APIs for distributing this content, but now we can't configure our clients to use the same configuration, even just within a single—
A: —cluster, yeah. I can definitely understand the appeal of reaching for the kubelet, but in some ways I don't see the abstraction as complete enough. You can get the kubelet to go pull the thing, but then we have to add stuff to it in order to support these use cases. Feeling that out, it almost feels like a separate component — now we have this kind of need to maybe just break out the config-sharing portion of it instead. Yeah, we should.
A: Yeah, let us know — the folks over on the Flux project, particularly the other three maintainers, are brilliant, and they're thinking about these kinds of problems all the time. So we're happy to help point folks in the right direction with these APIs.
A: Sweet. All right, I guess I'm gonna stop the recording, friends. Our meeting notes are here; I'll post it up to YouTube as well. Anyone else have any parting comments? I know Evan and I have filled a lot of the space in this one.