From YouTube: SIG Cluster Lifecycle - Cluster Addons 20190528
Description
Cluster Addons meeting 2019-05-28. https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#
A: All right, welcome everyone to the cluster addons meeting. It's the 28th of May. So, Justin let me know that he's not coming today; he's still on his way back from KubeCon, heading home. And yeah, I sort of provided the agenda; there's not much on there yet. Do you see, we have actions from last time? No, not really. So, two things... I don't know.

A: You saw that, yes? Right. Anyway, I see that's where everything was going from the last time. Oh, you can also do it in the next meeting; it doesn't really matter. Maybe before we leave, before you go: is there anything else any of you wanted to pick to discuss, or any update? Maybe.
B: I'm assuming people can hear me. Hello? Hello? Okay, cool, thank you. So, pulling up the add-on management doc here. Thanks, folks, for the comments. I know Jessica, you had some helpful insights here, and then we've had a few people be able to jump in as well. I'm happy to see that we've been able to have an async discussion, which was a big purpose of this doc, if you haven't had the chance to look at this before.
B: Basically, the scope of what it's trying to address is that, even if we have add-ons able to be packaged and run in some way, whether that be operators or using some tooling external to the cluster, we still need some privileged component and some standard interface around the add-ons. Our old example of that was the bash thing, right, that would loop over a bunch of manifests in the upstream git repo and then apply them.
B: So this document goes pretty in-depth into the runtime and kind of security properties and requirements that are necessary in order to achieve those goals in a generic way, trying as much as possible not to suggest that we use any one particular thing from upstream or the community. That's something that's worth talking about: we've had some community members suggest that we minimize the number of things that we use from outside the Kubernetes codebase, to get away from problems of dependencies.
B: People may or may not have differing opinions on that, and so the first bit is whether anyone has any strong feelings about not using RBAC. Justin has kind of left a placeholder here to comment, such as maybe retrofitting some unusual usage of authorizers, or maybe looking into webhooks or something like that. Otherwise, basically, in order to install an add-on, you should have an RBAC role.
B: That could be helpful for authorship of add-ons; that's a late-stage need, or kind of a higher-order need, not something that we need to start out with. Here's the discussion from Kevin and Jessica about what component needs to be privileged. This points out a distinction: if you're using an operator that's based off of a CRD, then the operator needs to have that CRD as a prerequisite to running. I believe a small number of operators actually are responsible for installing and maintaining their own CRDs.
B: I did find an interesting feature which allows for the composition of RBAC roles using a selector, which Justin thought was interesting. It allows us to not just use, like, YAML structure and comments to denote what purposes roles are for, but to actually aggregate them. I think it's called cluster role aggregation.
B: Also, Jessica mentioned that, apparently — I'm guessing — Jordan Liggitt wrote a tool for turning audit logs into RBAC roles. So we could potentially do some kind of generation of an operator's runtime permissions by running some end-to-end tests on it as cluster admin, and then maybe minifying that down into a more precise RBAC role. I really like that idea. Thanks for the comment; I'll have to look at that tool. Yeah.
D: So I think the way it works — he actually has a KubeCon talk about it from a couple of years ago; go look there for how it works. You don't grant permissions: you try to run the thing, you let the audit log catch the failures, and then you generate RBAC from that, and then you can iterate on that until there are no more failures and it has everything that it needs.

B: Interesting. So: start with no permissions, and then build out the permissions based on the failures.
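The iterate-on-audit-failures idea described here could be sketched like this — a self-contained toy, not the actual audit2rbac tool; the event shape only loosely mirrors Kubernetes audit log entries, and the function names are made up:

```python
# Toy sketch: turn "forbidden" (HTTP 403) audit events into a minimal
# RBAC Role. Rerun the workload with the generated role, collect any
# remaining denials, and repeat until there are no more failures.

def forbidden_events(audit_events):
    """Yield (apiGroup, resource, verb) for each denied request."""
    for ev in audit_events:
        if ev.get("responseStatus", {}).get("code") == 403:
            obj = ev["objectRef"]
            yield (obj.get("apiGroup", ""), obj["resource"], ev["verb"])

def build_role(audit_events, name="generated-role"):
    """Aggregate denied requests into RBAC rules, one rule per API group."""
    rules = {}
    for group, resource, verb in forbidden_events(audit_events):
        rule = rules.setdefault(group, {"apiGroups": [group],
                                        "resources": set(), "verbs": set()})
        rule["resources"].add(resource)
        rule["verbs"].add(verb)
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": name},
        "rules": [
            {"apiGroups": r["apiGroups"],
             "resources": sorted(r["resources"]),
             "verbs": sorted(r["verbs"])}
            for r in rules.values()
        ],
    }

events = [
    {"verb": "get", "objectRef": {"apiGroup": "", "resource": "configmaps"},
     "responseStatus": {"code": 403}},
    {"verb": "list", "objectRef": {"apiGroup": "", "resource": "configmaps"},
     "responseStatus": {"code": 403}},
    {"verb": "watch", "objectRef": {"apiGroup": "apps", "resource": "deployments"},
     "responseStatus": {"code": 200}},  # allowed request; ignored
]
role = build_role(events)
```

Each pass only grants what the workload actually attempted and was denied, which is why the result converges on a tight, precise role rather than a broad one.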
B: What's really weird about the Kubernetes API, when you really think about it, is that a namespace is a grouping-and-isolation kind of function, but it conflates a lot of ideas. So, like, you get network policy in there being used to scope on a namespace, and things can't belong to multiple namespaces; it just gets a little weird.
B: Also, namespaces don't have any of these other package niceties, like having versions of things for the entire collection. So there's some existing work — some upstream, a lot in the community — on different ideas of packaging. Helm 3 is using the ORAS-supported library that basically stores their packages in Docker-compatible registries; and then, like, we've got, you know, different ksonnet projects; we have kustomize layers — sorry, kustomize overlays — which come from git repos; and the HashiCorp libraries; and yeah.
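The kustomize base-plus-overlay model mentioned here looks roughly like this (a hypothetical layout; the file names, paths, and label are invented, and the syntax is a sketch of the era's kustomize conventions):

```yaml
# base/kustomization.yaml -- the addon's upstream manifests
resources:
- deployment.yaml
- service.yaml
---
# overlays/prod/kustomization.yaml -- a cluster operator's overlay,
# pulled from a git repo, customizing the base without forking it.
resources:
- ../../base
patchesStrategicMerge:
- replica-count.yaml          # e.g. bump replicas for this cluster
commonLabels:
  addons.example.io/managed: "true"
```

The point being discussed is that the overlay lives in the operator's own git repo, so distribution is "just git" rather than a registry.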
B: There are just lots of ways that people bundle stuff together. And Lubomir mentions that there's the cluster bundle work, which smashes a bunch of objects into a single Kubernetes object. So we should find an approach. My personal opinion is that it could be a big value-add to the community if we keep rolling with the kustomize stuff and we just load those in from git repositories; it keeps it really simple for cluster operators — and by cluster operators I mean people. But that's part of that discussion.
B: It's a little weird to think about. Like, I guess you can do really creative things with RBAC — like, say, give somebody the ability to edit etcds, but not Deployments or StatefulSets or whatever. Or similarly, instead of a somebody, you could make it an app: an app only has the ability to work on these custom resources, and RBAC prevents it from creating more than whatever, to protect the cluster from unintended fan-out. Yeah.
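Scoping an app to just its custom resources, as described here, might look like the following (a hypothetical sketch; the role name is made up, and the API group is borrowed from the etcd operator's CRD for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: etcd-operator-scoped        # hypothetical name
rules:
- apiGroups: ["etcd.database.coreos.com"]
  resources: ["etcdclusters"]
  verbs: ["get", "list", "watch", "create", "update"]
# Deliberately no access to deployments or statefulsets. Note that
# plain RBAC cannot express "at most N objects" -- a count limit like
# the fan-out protection discussed here would need resource quotas or
# an admission webhook on top.
```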
B: And if RBAC is part of the package, then the RBAC may have options, right? And it just gets a little funky, because, yeah, if you've got, like, a default 1000-DNS-record limit, or a 1000-etcd-cluster limit, and the user for some reason wants to do something else... Keeping it simple to start with is good, but ultimately we don't want to create a leaky abstraction; that basically just causes more pain for people in the long run.
B: Other tools do not — like, Helm only tries things once. If you use Helm with, like, the Helm operator and Flux, then you don't have this problem, because it's always continuously trying to converge the cluster. But that's an interesting nuance, and there are some attributes of tools and how they relate to that problem, along with the packaging needs. And then we had some comments about another suggestion, for a project called kubecfg. I've never used this before, but if anyone feels strongly about including this in here, we can definitely include it in our discussions.
B: Thank you — and yes, thanks for commenting some links. And then here I've tried to include some example use cases. Ultimately, the goal of this document is to produce some proposals, each of which would probably have their own KEP, and yeah, I imagine that the cycle time for this would probably land in 1.16, would be my guess. But yeah, there's a lot of information here. I appreciate so much that people have taken the time to read in and comment and have additions; let's keep iterating on this and get started on some KEPs.
C: One thing where I thought — maybe just because I haven't thought about it too much — there's a section in the document where you talk about, like, long-lived processes versus just sort of one-shot. I think that's very interesting, and I would maybe like to understand: is it a goal, or could we make it a goal, or is it crazy to make it a goal, that we can write the management code once and run it in both ways?
C: I can see where, just, for some clusters, you know, some users want it to be a long-running thing and some users want it to be a one-shot thing, and I think that, technically, it feels possible to solve that with the same code. I'm just curious if that's, like, scope creep, or if that sort of makes sense, to pull those together.
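The "write it once, run it both ways" idea raised here can be sketched as follows — a deliberately simplified toy, with made-up names, where the same reconcile logic backs both a one-shot command and a long-running loop:

```python
# Sketch: one reconcile function, two execution modes.
import time

def reconcile(desired, actual):
    """Return 'actual' converged toward 'desired' (creates/corrects drift,
    leaves unrelated keys alone)."""
    updated = dict(actual)
    for key, value in desired.items():
        updated[key] = value
    return updated

def run_once(desired, actual):
    """One-shot mode: apply the reconcile logic a single time."""
    return reconcile(desired, actual)

def run_forever(desired, get_actual, apply, interval=30):
    """Long-running mode: the same reconcile(), called in a loop."""
    while True:
        apply(reconcile(desired, get_actual()))
        time.sleep(interval)

desired = {"coredns-replicas": 2}
actual = {"coredns-replicas": 1, "extra": "left-alone"}
converged = run_once(desired, actual)
```

Because reconcile() is idempotent, the long-running mode is just the one-shot mode on a timer; the open question in the discussion is whether packaging both modes together is worth the scope.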
B: It's just that the security model and the runtime requirements of maintaining these components in clusters are not a fit for small clusters, for clusters that need to run on low-power hardware, or for people who are in a high-compliance environment where they need very tight control over the things that are running. And so, like, just in general, having a bunch of Go binaries that each have an RBAC role with the ability to potentially mutate lots of, you know, things in your clusters...
B: Existing tools like kustomize and Helm, as well as many others, already behave in the ad hoc manner, right, where they store state about what they did inside of the cluster in very low-value objects, and then the privileged code needs to be run, often with some, you know, account that's external to the cluster. So that's an attractive security model for some people.
B: So you need some kind of code that will, like, read the Kubernetes Service and then mutate it in order to do something. So, like, making the CoreDNS operator's logic work off of that CRD, but not in a long-running process — I would be very interested in that, since that means it's a component that basically every cluster installer has to work with.
C: I have some questions and concerns about some of the stuff you've brought up, especially in relation to kustomize. The questions that you seem to have are all about how useful it is for operators. But one thing that I have been thinking about is its distribution, which needs to be based around git right now, and it seems like a lot of the other community efforts mentioned in the packaging section seem to be focused more on OCI-based distribution.
C: An OCI distribution method would make it a little easier to make decisions for operators.

B: Yeah, that's a great point. Thanks, Evan. On the last call I mentioned some ORAS stuff — or somebody did — and then we started talking about it, and Tim St. Clair jumped in and said, hey, maybe let's not have so many external dependencies, since that's apparently not a Kubernetes project. Great.
B: Yeah, that's my understanding: ORAS is a library to easily construct the OCI packaging format. But the only argument I can think to make right now is that if you stand up a Kubernetes cluster, you have to have some image registry available, and you don't necessarily have to have git available. But then...
B: ...that's the dream so far, yeah. The only other weird part that I have about that is that, unfortunately, using the OCI stuff, you don't normally have access to whatever storage back-end is actually being leveraged by the cluster, so that stuff probably needs to be pulled in by default at runtime and then maybe stored into, like, a PVC or something, if you want to get really weird about it. But that doesn't really look different from what you would have to do with git.
B: We could get one single answer early, or instead support multiple ones, but then at some point, you know, ideally there's, like, one git tooling and one OCI form. And yeah, I would say that I think kustomize comes out of SIG CLI with a little bit of input from SIG Apps, if I'm not mistaken. And kustomize has a packaging format; it's got ways of doing, like, overlays, and right now that's basically through git, but I don't see why that couldn't be easily retrofitted onto an ORAS library, I mean...
B: That's, like, one step closer to that shared interface. The only other bit is then, like: how do we execute these things and get them applied into the cluster, and do we do that, you know, on an ad hoc or converging basis, or both? Cool, I like it. I'll work to get some of those points represented in the document, so that we don't forget them for the proposals.