From YouTube: 20181023 SIG Cluster Lifecycle
B: So CRDs have been great if you have a purely external project, but when you have internal core Kubernetes components depending on CRDs, you run into a bunch of challenges. We were one of the teams that actually introduced a couple of CRD objects last quarter that the attach-detach controller, for example, will use. One of those challenges was: how does a CRD that a core component depends on get installed? Our initial attempt at this was that the controller that uses it could install it as part of its initialization. We're going to follow that path, but it sounds like there is a larger requirement for the project to figure out how these CRDs should be installed, whether it should be some sort of new controller, some new mechanism, maybe a v2 of the add-on manager, so I wanted to get that discussion started here. I went to API machinery last week and their recommendation was to come talk to this SIG.
A: Ah, cluster lifecycle, I see. Justin turned his camera on, so I know we are both probably going to say similar things. There is ongoing work with regards to management of core add-ons and the talk of bundles, and that work is ongoing. I don't think it's in a fully fleshed-out state yet, but I'll pause there and let Justin talk.
C: I think what you said to him was spot-on. I guess my question would be: if there is a controller, whatever that is, that some more privileged thing has to install, is there any reason that that privileged thing, whatever it ends up being, should not also install the CRD? Isn't that what we want?
D: The problem we would have is that I would want to install multiple aggregated API servers, and each one wanted to install a global, non-namespaced APIService registration; a resource like that would be the one caveat. Otherwise I think you're right, that is the right place to install it.
A: It becomes a little weird to me in that, if you start to modify core APIs, it is basically a core component; then it's no longer an add-on and it has moved to being a core component. I can see a migration path for some things that are add-ons to be managed via the add-on manager, but once we get into the state where the API is quote-unquote v1 of some sort and it's depended upon by multiple core components, it is a core component by itself.
C: I mean, I think one of the reasons for the bundle is the base observation that a Kubernetes cluster shortly won't work on a cloud provider in the same way that it does today, because you'll need an external cloud controller, and is that core or not? That's sort of semantics, but I think we want to acknowledge that there will be a bunch of things that now have to be installed for even parity with what we have today.
A: I think bundles are the right way to solve this in the current state of things, because you're still going to have the core API; I don't see that really migrating away wholesale, and that would be a dangerous thing anyway from a deployment perspective. So it becomes a question of: are these all net-new features, Assad? If they're totally net new, then it makes perfect sense to have a separate bundle that applies the add-ons in bulk, which would include all the add-ons, not just this particular one. Now, is it going to be a separate controller (I don't exactly know how the installation process goes), or do you want to have something actually creating them? Because typically, when you have a controller, it can sometimes initially create these CRDs.
B: So for us the behavior is, for example, that the attach-detach controller will look at lists of a particular CRD and, if it exists, modify its behavior based on that. So the assumption the attach-detach controller would like is that this CRD already exists. We can add in logic that handles the case where it doesn't exist.
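A minimal sketch of the gating pattern being described, assuming an illustrative CRD name and client wiring rather than the actual attach-detach controller code: the controller probes for its CRD at startup and only enables the optional behavior when the CRD is already registered.

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// crdEnabled reports whether the named CRD is registered, so a controller can
// turn on CRD-backed behavior only when the CRD is present and degrade
// gracefully otherwise.
func crdEnabled(ctx context.Context, cfg *rest.Config, crdName string) (bool, error) {
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crdName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // CRD absent: fall back to the default behavior
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	// "volumeattachments.example.com" is a hypothetical CRD name for illustration.
	enabled, err := crdEnabled(context.Background(), cfg, "volumeattachments.example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("CRD-backed behavior enabled:", enabled)
}
```

With a check like this, the only extra permission the controller needs is read access to CustomResourceDefinitions; something more privileged can own the actual creation.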
A: I think this is broadly applicable, though, and I think we need to start to think about it. It's not just a bundle of add-ons; it's an installation procedure for CRDs, which is a one-shot type of installation, and that needs to be thought out more thoroughly, because it doesn't just apply to you. It's going to apply to everybody.
B: I think lavalamp and some of the API machinery folks really didn't like the controller pre-installing the CRD. I think part of the objection was the fact that the CRD was being generated within code instead of coming from a manifest file, which is easier to keep track of. That also, I think, broke the API reviewer semantics that they are used to: API reviewers are owners of the types.go file and the API folder, so it's very clear when API changes are being made and who the approvers are.
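One way to read that preference, sketched under the assumption of a hypothetical reviewed crd.yaml manifest in the repository: an installer decodes the manifest and creates the CRD through the apiextensions client, so the API definition stays in a file that API reviewers can own instead of being generated inside controller code.

```go
package installer

import (
	"context"
	"os"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/yaml"
)

// ApplyCRDFromManifest reads a reviewed CRD manifest from disk and creates it,
// keeping the API definition in a manifest file rather than in generated code.
func ApplyCRDFromManifest(ctx context.Context, cfg *rest.Config, path string) error {
	data, err := os.ReadFile(path) // e.g. a hypothetical manifests/crd.yaml
	if err != nil {
		return err
	}
	var crd apiextensionsv1.CustomResourceDefinition
	if err := yaml.Unmarshal(data, &crd); err != nil {
		return err
	}
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, &crd, metav1.CreateOptions{})
	return err
}
```

A privileged bootstrap step (an add-on manager or installer job) could run this once at cluster creation, which matches the separation of duties discussed below.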
C: I do agree that it is convenient for the controller to register its own CRD. I do think in production, just for authorization-type reasons, we want to limit the permissions that this controller has; we would prefer not to have it actually register its own CRD. But as I understand it, that's as simple as pre-registering the CRD: the controller wakes up, says "oh, there's my CRD, I don't need to do anything," and then in development it just works. Is that right?
A: Do you have thoughts here?
F: Yes; sorry, I was running a bit late today. I was trying to read through the issue, since I missed the beginning of the description, to figure out exactly what they're looking for, because when I joined he was saying something about the add-on manager, but I didn't catch all of it.
F: It sounds to me like this is in some ways similar to StorageClass, right, where the core controller for storage today, if there's no StorageClass defined, sort of gracefully degrades; but if the installation mechanism for the cluster has created a default StorageClass, then this built-in storage controller has enhanced functionality. It would be similar to that, where the core controller would rely on this CRD but wouldn't start breaking or crash-looping the controller manager if it's not there; but we would like the common installation mechanisms to have an easy way to plug in the creation of these CRDs on cluster creation. Yes, okay. And I guess the difference, where do we draw the line between this and sort of add-ons?
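A rough sketch of that graceful-degradation pattern, assuming illustrative wiring rather than the actual in-tree storage controller code: look for a default StorageClass via the standard annotation and enable the enhanced path only when one exists.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// hasDefaultStorageClass reports whether any StorageClass is marked as the
// cluster default; absence is not an error, the caller simply degrades.
func hasDefaultStorageClass(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sc := range list.Items {
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := hasDefaultStorageClass(context.Background(), cs)
	if err != nil {
		panic(err)
	}
	fmt.Println("default StorageClass present, enhanced behavior enabled:", ok)
}
```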
B: I mean, I see them as being just like the core built-in APIs. When you upgrade or downgrade, we're going to need some sort of hooks to be able to say: okay, I'm upgrading this object, I need to run through some special downgrade or upgrade script or process, things like that. So I think, yes, in that way they're different from add-ons.
F: Well, it sounds kind of similar to the way we've been talking about add-ons, because what we say is that add-ons are tied to the lifecycle of the cluster, in the sense that oftentimes, when you upgrade your cluster, you also want to change the version of the add-on that's running, because that's what has been validated; it might work in a different configuration, but it's not really supported in the same way. And so this would be the same thing, where you upgrade your cluster and... I guess I'm curious: what component do you think this is tied to? Is it tied to the upgrade of the controller manager here, effectively, right now? Yes.
C: I agree that this is very similar to add-ons, and I think it's pretty clear that SIG Cluster Lifecycle seems like the right home; API machinery pointing it at Cluster Lifecycle feels correct. It feels like this is tied to the bundle work, tied to revamping the add-on manager and making it meet whatever requirements it doesn't meet today. I can't think of a better home for it, honestly.
F: It sounds to me like what Assad was looking for was: can we find the right home for it, so that we can start figuring out how to solve the problem once, instead of creating chaos by everybody solving it separately. So I think that part of the process is working here, and it sounds like Assad is not proposing that he has the solution, but that he has a short-term workaround.
F: That will unblock my short-term work in some cases, but if we're using the add-on manager to do this on GKE, that's not going to help for a kubeadm installation, right? So we are still going to want a more uniform way of doing this across installations and across different types of deployers, and I think that's...
F: It's two things. One, I agree with Tim: we should have a KEP describing this, and I think this SIG may end up having to own that KEP if it becomes a sort of general-purpose "how do we deal with CRDs," but I thought it would be great if you could kick off the process, since it sounds like you've thought about this quite a bit, at least for your use case. And two, we should make sure that these sorts of requirements are written down and that the KEP feeds into the process of revamping add-on management.
F: I think so far we've been thinking about add-on management more as controllers that need to run and that are tied to the lifecycle of the cluster, as opposed to API objects, because the core API objects, as you said, have been built in and as such have been revved with the API machinery rather than separately. But I think that's a really good way to think about it, because we are going to want more and more of these sorts of extension APIs, other CRDs that are like extra APIs we have on the clusters.
C: Right, and it's actually really subtle, because if the new version of your CRD introduces a new API version, then on a downgrade you actually don't want to downgrade to the old CRD; you might keep the CRD registered with the newly introduced API version, even if most people would then be using a lower one, so that you don't orphan objects stored at those newer versions.
F: That's a general problem with our APIs, right: it's not clear exactly how to downgrade. And I think you're right that, to start with, that's probably a good line to draw, to say we shouldn't just downgrade these, because we might orphan things and break things; but we also need to figure out a way where it is possible to safely downgrade these as well.
B: Okay, so let's make sure there are owners assigned and a timeline. For the feature issue that I've opened, we need to figure out whether we want it to be part of 1.13 or not. It sounds like this is a short quarter, and I'm not sure there's going to be a lot of progress here. So would folks be okay with removing it from the 1.13 milestone? I would.
A: Silence means no, so I think just write an email with a timeout of about three days, and when you file the request for a repository, point them at the voting email, say that voting has occurred, and include all the other details that are defaults for when you request a new repo, and you should be good to go. Okay.
A: One thing that I think is important to take a look at, though: if you decide to import or vendor things in, take a quick look at your dependencies to verify that your dependency graph is Apache 2 as well. As a project we have not done this very consistently, and we don't have tools for it, but just doing a quick pass and adding some type of note as part of the repo request would be helpful.
F: Yeah, that's a great point. We were looking at the pre-CRD version of the Cluster API recently and found that some sort of LGPL dependency had snuck in there from the old apiserver-builder dependency chain, which was unexpected. So yes, we need to be more careful. I think it was LGPL, or maybe some other license that we didn't like.
C: I pasted the rules in the chat. Yes, so the dependencies are one part, there are some boilerplate things, and then the contributors must have signed a CNCF individual CLA or corporate CLA, ideally; if not, there's some escape-hatch type thing, but hopefully, since there are only a few contributors, that is not an issue. I guess. Hopefully.
F: So I snuck this one in. I think we mentioned this a while back, but our SIG has three presentations that we're doing for KubeCon Seattle. Tim and I are going to give a SIG intro session, which is mainly targeted at trying to attract new folks to join the SIG, and talks about what our SIG does and the projects that are inside the SIG. The other two presentations...
F: So if people are interested and aren't contributors to the Cluster API project, please ping me on Slack by the end of this week, and then I'll reach out to the folks that have done so and we'll try to figure out a somewhat fair way of picking a co-presenter. I think it would be nice to have two presenters for each of the sessions, especially with some company diversity in there.
F: I know there are other cluster-lifecycle-focused talks at the conference; there's also a kubespray talk and some other ones. I meant ones that our SIG has signed up for as a SIG, ones that don't have to go through the same sort of approval process, because we sort of short-circuit that and say these are SIG-sponsored talks. And as such, I think we are trying to open it up to presenters, to get people involved who are interested in presenting but didn't go through this or the normal talk process.
G: My specific ask right now is: how can we set up some infrastructure such that we can run these Cluster API integration tests? There are some common things that everybody is going to need, and it's not super clear to me how those are going to fit into the existing Prow-based infrastructure. Is that still too vague?
J: I think the main caveat there is that the Prow stuff makes sure we always test PRs against master before merging, and that's serialized; but otherwise that just works. I don't think we can say a lot more without knowing what the integration tests actually need. What do they do?
A: I think right now, if I summarize, the simple problem is that we need to be able to build, push, and then test, right? The push is for the eventual testing capabilities, and because it's in a separate repository I don't care much; I just basically want to have a job that allows me to arbitrarily build, push, and test.
J: Basically, you just write a Kubernetes pod spec as a container, and you add a few things around it to say this is the name of the job and this is what it needs to check out. Other than that, it's going to dump you into a container with a GOPATH set up and your code checked out, and you can run commands to do build and push or whatever. So, sorry...
A: This is the first pass of testing, but eventually we're going to do periodic jobs with full deployments, just like we do with other jobs, and that will require an actual registry that is accessible to the outside world, so that they can pull the images into, say, some Amazon deployments. That way it can actually run the same way with other cloud providers, right?
E: Not all providers do exactly the same thing, right. I know we need that at some point, but can we unblock ourselves by starting to use a local Docker registry and get some stuff going, and then by that time try to figure it out with the test infrastructure, if we can have something similar to the Kubernetes test images, where...
E: Even in the PR jobs we can download the container image tars, upload them to the Docker registry, and then use the Docker registry. So there will be other things that we could do when we get to that point; I'm hoping that we won't need to do that at that point, but yeah.
E: We have that pattern right now in CI: the CI cross jobs push images to a GCR repository and then other people pick them up from there. So yeah, we have that pattern, and we need to replicate it anyway when we move from Google to CNCF infrastructure. The additional thing here is that we need to open up that process to the cluster providers and cloud providers as well, so yeah, there is more work to be done; it's not just for this group.
J: So far it's a Kubernetes Secret in the build-and-test cluster, with the implication that it probably can be stolen by a pull request; but no matter where we put it, if you give some Cluster API implementation access to some credentials, someone could put code in their PR to obtain that credential. So it just needs to be a credential that's for CI resources and doesn't have access to anything else.
F: It's also readable, so that's also a reason that we require ok-to-test for people that are not part of the SIG membership. Basically we're saying we trust people that are part of the SIG organization, and if not, then we expect someone in the organization to actually take a glance at the code before applying that label, so that we don't have that sort of problem from somebody just randomly submitting a PR.
J: Yeah, like I said, I think a few people have looked at this, and there's very little actual automated image pushing besides Kubernetes itself today. We recently added some for testing Prow, but that works a bit weird: it's only in postsubmit, I think, not in presubmit, and so it's running in a separate, more trusted cluster that only has certain builds in it, and it's pushing to the Prow test-images location.
A: So what's the policy? This is why I sent an email message to the working group the other day with regards to fine-grained access controls and sort of layered levels of repositories or registries, because GCR has issues with that, right. The idea of having something like a k8s.io registry, or some registry where you could have layers, which in this case could just be denoted by slashes, and fine-grained access controls for those individual layers, would be the ideal scenario. Or you could just do...
A: So I think feeding your requirements in with the working group is probably what we're going to have to do. It seems like this topic is a little meta, and we need to start hashing out requirements with the working group and try to move forward; but I think the local hack, the local registry, will work for the time being.
A: Last topic: Justin, we've got seven minutes. I hope you won't need it.
C: But yes, so in our last meeting we talked a little bit about this proposal for etcdadm, which I am making, which, to summarize, is to combine Platform9's etcdadm CLI tooling with the work I was doing for the automated etcd manager.
C: Tim, you asked me to do a KEP, so I've put together a KEP. The response seems pretty positive so far; the only question was around the scope, in terms of: is this going to be a "kops for etcd" type thing, where you do "etcdadm up" and it gives you a nice new cluster? And the answer is a definitive no. So I did an additional commit this morning to scope out what direction we are going and why, versus what we're not doing.
C: But I guess, Tim, I know you gave it a sort of conditional or provisional approval; I don't know if we're ready to send out an email to the mailing list and proceed with the sort of lazy-consensus approach, or whether we want to do something more at this point.
C: I'm just worried that... I can always do a longer timeout, but I figure I might as well send it to sig-cluster-lifecycle just so everyone can see it. You know, I don't want it to be a case of: we do the KEP and then someone says they didn't see any of this, that sort of thing. What do you say?