From YouTube: SIG Cluster Lifecycle - Cluster Addons 20200915
B
Yeah, yeah, let's get going. I think the main purpose of our meeting today will probably just be to talk a little bit about build infrastructure.
B
So we started getting some of the structure in place to stage images and get them pushed into a repository that people can pull from, one that would host things like the operators that we're publishing. Getting CI hooked up to run tests and that kind of thing inside of the repo is also very useful.
B
So Justin has been working on that, and then Sandeep also put in a patch, and I figured we would look at those. And then, yeah, Daniel, are you able to enable participant screen sharing, or let me grab host, in order to do it?
B
Sweet. So I just wanted to pull up our agenda really quick; if you're new, this agenda is public, and if you join our group, you can add your own edits. So this first patch here, this is the GitHub-based workflows, and I figured we could take a look at it just as a group and talk about the context as to why it's being proposed.
B
Sandeep, you're on right now. Do you have the ability to speak to the purpose of this patch a little bit?
C
Yeah, the main purpose was to add some basic checks on go formatting and go.mod files, since the various operators each have their own individual go.mod, for example. So this basically adds a GitHub workflow for that. I don't know if we need to enable something for the actions to run, but I don't see a way for me to enable those actions in this repository.
B
Right, so you added a GitHub Actions workflow here to do some things with Go, and it's supposed to be triggered on pushes and pull requests, but then you're noticing that these checks are not running on the patch, right? And Justin, your previous patch was also using Actions to invoke the container builder, right?
D
I did have one. I think it's fine for this PR; I don't think GitHub runs actions that are newly introduced in the PR itself. So I don't think we should worry about the fact that it isn't running them on the PR.
D
If that's what we're concerned about: if you want to test it, you can do it on your own fork, and that's sort of how I've tested it in the past. I think you have to merge to master in your fork, but then it will run that test and you will see the results there in your fork, and then you can point to your fork and say, look, here it is.
C
Okay, yeah, I am doing that. I did open a PR on my fork to test whether it's running and working, and it does work there. But yeah, I was confused about whether it should be running here as well.
B
So the examples of the tests running are all passing on your copy of the repo?
C
Yeah, sorry, I had to do some fixes for go format and go mod, for example, on my local copy, and push those changes as well.
B
Yeah, I don't think that running go format and making go.mod compliant is particularly controversial, so this is a really great cleanup. I suppose that's probably the major change, and then everything else is just auto-generated, right?
C
Right, there was one more thing. I think the installer package has some vendored directories as well. I was wondering if we actually need to have a vendor directory, because some of the go format changes ran on the vendor directory as well, and the installer is the only directory which has vendored packages.
B
We
could
likely
ignore
any
vendor
directory
from
go
format,
but
I
suppose
that
it's
not
entirely
necessary
to
have
the
vendor.
B
Nick, you just pointed out that it looks like there are some characters that were unintentionally removed, so maybe we should just fix that particular discrepancy.
B
Yeah, this doesn't look super intentional, right? Oh wait, I was just correcting, I think, a grammatical issue. That's probably not important for this PR, just because it's probably unrelated. I guess I was misreading this; I thought it was part of the patch for some reason, but you're just suggesting a different change, because this is just a whitespace thing from go format, I guess.
B
Okay, well, so, the vendor directory. I guess this was...
D
So the two reasons are: sometimes people have problems pulling a lot of repos, particularly in regions like China, where I think there can be firewall issues, so just having it all in one pull and then you're good is often handy. The other one, which is...
D
...harder to fix, I think, is that the vendor directory shows you the subset of code that is actually in scope, and it can highlight for you, when we change a go.mod dependency, what actually changed; vendor lets you review those changes. So if you're doing a production image, you can get a better feel for what's actually changed: sure, I've added these dependencies, but what have I pulled in? Is it a two-line change, or is it a three-million-line change?
D
One that's accidentally pulled in, like, the entire universe. That's very difficult to address without the vendor directory. So I would suggest ignoring it from formatting if we can, and if not, we can look at whether the installer and the other tools actually need to vendor.
B
And yeah, we probably shouldn't be formatting code that's already versioned and vendored from somewhere else, so that's probably justification for trying to ignore it. But other than that, thank you so much for doing this cleanup. I think this is a really solid thing to be continuously running on the repository, and I hope we can keep maintaining it so it can grow and live as we add more projects.
B
So this next one is your patch, Justin, which I looked at earlier, and it's looking like a pretty simple, needed set of changes. But we just wanted to do this on the call, since we were already reviewing the other one. Basically, to me it looks like you're just pulling in all of the plumbing for the Makefile to actually push the image.
D
Yeah, and push it to the right place. Yes, it was just a mistake I made previously, just because it's sort of hard to test.
B
Everything wired up, yeah. And does this make target actually push the image now, or is it just creating it locally?
D
I'm pretty sure it's built and pushed; let me double-check, but I think the make target does push.
B
So as long as it's making the right tag, that's a great addition, but it might be missing a make target.
B
Yeah, I don't know if you would like to maybe review that before we merge this patch, or we can merge this as it is and you can submit a subsequent one. Which would you prefer?
B
Okay, yeah, it probably would be better to just amend that in here, since it's in the same repo.
A
Yeah, it's really just a quick update. Somtochi's blog post has received many, many reviews, and I think it's very close. I think it would be nice if we could all share it once it's landed on the Kubernetes blog. And I just saw, while browsing through the pull requests, that there are two small ones from Somtochi: one would need a Mac user to quickly look over it, and the other one also wasn't very big.
B
Porting across, like, a Dockerized build. But this PR is very interesting.
B
I don't think Jake is on this call, and he might be considered the maintainer of the code that this is modifying, unless maybe Justin or Jeff have a different opinion. But this is for the GKE-related addon manager, and what Somtochi's done here is she's started to replace the exec of kubectl inside of the Go code with natively using kubectl as a library.
B
So this might be interesting to people here for a variety of reasons. We've talked about similar patches that Somtochi has opened up in kubebuilder-declarative-pattern before, but if you'd like to see an example of using kubectl as a library, or provide your input on this patch, take a look. I'm not up to date on which part of the review cycle we got to here. You still had some comments that Somtochi was going to work on, right, Justin?
B
This patch is great, really interesting, so go ahead and take a look.
D
Yeah, I think this one is actually a simpler addition of just starting to use the clientset to Get, instead of exec'ing kubectl. Obviously Somtochi has done the work elsewhere to do apply, for example, but this is like the first step. I don't want to speak for Jake, but I would imagine that we want to get the image built and into the e2e tests, and then we start changing it.
D
So, in other words, let's get some more coverage before we make changes. That's blocked on me and that other PR, where I'm not pushing or not labeling correctly.
B
I see, so this is only using client-go so far and not changing the apply logic yet, right? Okay, yes. So that's much more standard, I guess. That's interesting, although still very notable.
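
(As a rough illustration of that first step, here is a minimal client-go sketch of "using the clientset to Get instead of exec'ing kubectl". The in-cluster config, namespace, and ConfigMap name are hypothetical placeholders, not taken from the patch under review.)

```go
// Minimal sketch: reading an object with client-go instead of exec'ing kubectl.
// Assumes the process runs in-cluster; the namespace and ConfigMap name are
// hypothetical placeholders, not from the patch under discussion.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // or load a kubeconfig instead
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly the native equivalent of
	// `kubectl -n kube-system get configmap example-addon-config`.
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "example-addon-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("resourceVersion:", cm.ResourceVersion)
}
```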
B
The Weaveworks team is working on toolkit.fluxcd.io. This is called the GitOps Toolkit, and it's shaping up to be the primitive pieces that will make up a second release of Flux. It's considered pretty much a rewrite, where things have been broken out into modular components and the reconciliation portions are triggered off of Git repository sources, via Kubernetes events, that kind of thing. It's certainly in the kind of space that add-ons works in.
B
No
so
killer
them.
The
main
thing
that
makes
it
different
from
something
like
helm
operator
is
that
the
helm
operator
is
driven
by
a
kubernetes
declarative
api,
whereas
tiller
was
talked
to
by
its
own
native
api.
B
So
it
was
a
form
of
our
back
escalation,
since
tillers
typically
had
very
high
privileges
and
had
had
an
exposed
bespoke
non-kubernetes
api.
B
So
that's
that
is
fundamentally
different,
but
you
do
still
receive
the
active
reconciliation
pattern
of
like.
What's
declared
to
be
released
is
what
helm
operator
will
install
for
you,
and
there
is
a
least
privileged
model
for
that
for
the
get
ups
toolkit,
which
is
the
primitives
of
flux,
v2,
helm
operator
is
just
being
renamed
to
helm
controller
and
it's
a
new
implementation
that
can
operate
as
a
multi-tenant
singleton
in
the
cluster.
We're
using
some
api
machinery.
B
Authorization
features
we're
using
some
authorization
features
to
impersonate
things
that
and
allow
us
to
support
a
multi-tenant
environment.
In
the
majority
of
cases.
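
(As a rough sketch of the impersonation mechanism being referred to: generic client-go usage, under assumed names, for deriving a per-tenant client. This is illustrative only and not the helm-controller's actual code.)

```go
// Minimal sketch: a singleton controller deriving a lower-privilege client by
// impersonating a tenant's service account. tenantClient and the account naming
// are hypothetical; RBAC bound to the impersonated account is what actually
// limits what the controller can do on the tenant's behalf.
package impersonation

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func tenantClient(base *rest.Config, namespace, serviceAccount string) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: "system:serviceaccount:" + namespace + ":" + serviceAccount,
	}
	return kubernetes.NewForConfig(cfg)
}
```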
B
Yeah, and not just to release the chart, but also to decide which charts are allowed and from which repositories, also controlled via the custom resource policies.
B
And there are different components that can apply plain manifests as well as Kustomize directories, so you can compose these things together: either Kustomizations that create HelmReleases, or vice versa, HelmReleases that create Kustomizations that sync a Git repo to the cluster.
B
One
final
use
case
is
that
helm,
it's
possible
to
sync
a
chart
from
git
repository.
These
are
the
kinds
of
things
that
we
found
are
quite
popular
in
the
first
rendition
of.
B
I certainly would be interested to see as well how some of these GitOps patterns, with this tooling that we're playing with, compose with other things in the open-source and enterprise ecosystem.
B
Yeah, that's the last thing I had to share. Again, if you've got any final things that you wanted to add to the agenda, feel free to do so, or just speak up; otherwise we're kind of at the end of our meeting.
C
So I actually had a topic in mind, since the 1.20 cycle has started: do we want to proceed with the kubeadm work, installing add-ons via kubeadm? There's one of the enhancements which I think Julie had opened up. Should we continue with that, or for that agenda should we open a new KEP?
B
The existing KEP is good. This is really just a task of bringing this up with the kubeadm maintainers, and if you or somebody else would like to do that, potentially with me, then I'd be happy to start those conversations again.
B
I do think that there is a pretty clear need to either handle the add-ons problem inside of Kubernetes or to separate Kubernetes from the add-ons problem, and we were in agreement that one of those two was going to need to happen. But I think some of the priorities for the previous maintenance cycle were just stretched thin for Fabrizio and Lubomir and the others. I don't believe any of them are on the call right now, but if we were to chat with them on the kubeadm call, that would be good, because they're very much open ears on this; they just don't have the cycles to do the work themselves.
B
So for anyone that doesn't know, the kubeadm office hours are in this same time block but tomorrow, and it's under Cluster Lifecycle. So if you are interested in joining in for that conversation, it sounds...
B
And then, yeah, I'm sure they'll be eager to talk about that. I imagine there will be some backlog stuff that they want to talk about.
B
So, same time block tomorrow. Thanks for bringing that up.
B
All right, well, thanks everyone for joining. If you're new and you didn't speak up, welcome to the meeting; we hope to hear from you later on. Thanks for staying up to date with what we're doing in this group, and we will convene, if not before, in two weeks.