From YouTube: 20180821 sig cluster lifecycle
A
Hello, today is August 21st, 2018. This is the standard SIG Cluster Lifecycle meeting. I had put up a bunch of agenda topics that are meta topics to discuss today, and I was hoping that Robbie would join, but Justin is here and he now works for Google, so I will proxy through Justin.
So the first meta topic that applies to all of SIG Cluster Lifecycle: a long time ago, many moons ago, there was an effort (I'll spare you the colorful things I would say about it) to add a bunch of teams to the community repository, so that those teams had the ability to see different aspects of things that were ongoing, and they tried to apply a standard label. There has been pushback within the community over time to try and eliminate those teams.
A
Currently, from what I've seen, there are only two remaining teams left; most other groups have proactively done this cleanup, but I don't know if other people have other thoughts there. I know you commented on the PR, Ravi.
A
So one of the topics I wanted to broach here is: (a) should we still continue to have this meeting on a weekly basis, and (b) should this meeting be germane only to the group-level topics that apply to other folks or cut across SIG Cluster Lifecycle? Because we have office hours for all the different subprojects, and I want to make sure that other folks don't feel that this particular meeting is dominated by topics that aren't applicable to them. So if people are interested in kubeadm, there are convenient office hours.
E
I think there are definitely, like, PR discussions that probably naturally belong in the office hours. I think if there's a kubeadm discussion, for example, that's more cross-cutting and higher level, it'd be very welcome here, and we can sort of follow that approach and see where we end up with where the time is spent, right? Like, I certainly don't talk about kops PRs here, because they're not of interest to most people, but we do talk about them in office hours.
A
Agreed. One of the things I wanted to do, now that we actually have a charter which should hopefully be merged soon, is loop back into the fold some of the other folks, like the Kubespray folks and whatnot, so that they have a venue where we can cross over some ideas, and they can be welcome to join this venue as well, because they're kind of in incubator land, which is kind of a holding zone for the time being.
A
Do folks think that we need a weekly meeting for this, or would bi-weekly work? Or do they think half an hour? Because these are the meta topics that kind of cross over... Well, in the past we've kind of dominated some of this conversation with kubeadm-specific things, but I don't necessarily think that all the kubeadm topics slice across this whole group.
D
I think bi-weekly sounds better to me than half an hour. I feel like if we try to do half an hour every week, we're likely to kind of push it over or get cut off, whereas if we do an hour I think that'll be easier for sort of queuing up more urgent items and time-boxing things, and it frees up more time off your calendar.
A
I'm just... I suffer from meeting paralysis right now and I'm trying to shed as many as I can.
D
I've seen Aaron Berger doing a great job of that; I see him sending emails to various groups saying, "Here's what I'm doing. If you haven't objected by this time, we're going to move forward," and then, when people don't object, it moves forward at that point. I think that's sort of a good decision-making process: we give people this forum if they want to object in person, we give people the email form if they want to object asynchronously, but then we don't block for too long.
A
So if we don't get an objection within one week... I'm just trying to plan for next week, because I don't think we'll get the calendar updates in place. So if there isn't an objection, we could probably punt this meeting next week, if that's okay with people, or maybe we'll just hold it one more time and then go from there. I'm going to skip the next topic, because that kind of overlaps with some of this, so the next one is yours, Richard.
H
Yes, this kind of spawned from my kubeadm PR, but now I actually think it might be something that can cut across a few other things within this SIG Cluster Lifecycle stuff. I'd like to see if we can come up with a kind of standardized way of dealing with configuration files for our various bits and pieces, specifically one which is sort of distribution friendly, coming from the openSUSE point of view.
H
So, like, one thing I'd like to see in kubeadm and elsewhere would be having configuration files distro-packaged, living in, like, /usr, which then get augmented or overwritten by user configuration files living in /etc. So you have this way of a distro being able to roll out, you know, kops, kubeadm, whatever, with the distro defaults, and then, you know, have the user overwrite that willingly. That's what I'm bringing to the table, and we'll see how many people I can scare in one go now.
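As a rough illustration of the layering Richard describes (distro defaults shipped under /usr, admin overrides under /etc), here is a minimal sketch in Go; the file paths, the JSON format, and the two fields are hypothetical stand-ins, not the actual kubeadm or kubelet configuration.

```go
// Minimal sketch of "distro defaults in /usr, user overrides in /etc".
// Paths, field names, and the JSON format are illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Config stands in for any component configuration.
type Config struct {
	CgroupDriver     *string `json:"cgroupDriver,omitempty"`
	ContainerRuntime *string `json:"containerRuntime,omitempty"`
}

// load reads one layer; a missing or unreadable file simply contributes nothing.
func load(path string) Config {
	var c Config
	data, err := os.ReadFile(path)
	if err != nil {
		return c
	}
	_ = json.Unmarshal(data, &c)
	return c
}

// overlay applies user settings on top of distro defaults: any field the
// user left unset keeps the distro value.
func overlay(distro, user Config) Config {
	out := distro
	if user.CgroupDriver != nil {
		out.CgroupDriver = user.CgroupDriver
	}
	if user.ContainerRuntime != nil {
		out.ContainerRuntime = user.ContainerRuntime
	}
	return out
}

func main() {
	distro := load("/usr/lib/kubernetes/config.json") // shipped by the package
	user := load("/etc/kubernetes/config.json")       // owned by the admin
	effective, _ := json.Marshal(overlay(distro, user))
	fmt.Println("effective config:", string(effective))
}
```

The point of the sketch is only the precedence: any field the admin leaves unset keeps whatever the package shipped.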
A
So that way you can have similar, systemd-esque configuration files throughout your system, instead of having these half-baked artifacts of, you know, systemd unit files for some stuff and then some configuration for the other things; and some of those things we were complaining about before. So the work is ongoing and folks are aware of some of it, but we have not talked about applying that grand unified component config to other areas.
H
I mean merging, or tiering. So, you know, you could have a situation where the distro-provided config is just providing, like, the starting point, and then, if the user provides a different config, none of the distro config applies. Or you could go, to take the systemd example, which is a wonderful one, for a fancy sort of layered model of things, like selective drop-ins for certain parts. But that's, yeah, something to talk about and think about. That's what I wanted, yeah.
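For contrast with the whole-file override above, the systemd-style alternative mentioned here layers small drop-in fragments on top of a base file. A minimal sketch, again with purely hypothetical paths and a toy key=value format:

```go
// Sketch of drop-in layering: a distro-shipped base file plus admin fragments
// from a ".d" directory, applied in lexical order. Paths are hypothetical.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

type settings map[string]string

// parse reads simple "key=value" lines; unreadable files contribute nothing.
func parse(path string) settings {
	s := settings{}
	f, err := os.Open(path)
	if err != nil {
		return s
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			s[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return s
}

// apply layers one fragment on top of the accumulated result.
func apply(base, layer settings) settings {
	for k, v := range layer {
		base[k] = v
	}
	return base
}

func main() {
	// Start from the distro-shipped defaults.
	merged := parse("/usr/lib/kubernetes/kubelet.conf")

	// Then apply admin drop-ins in lexical order, systemd-style.
	fragments, _ := filepath.Glob("/etc/kubernetes/kubelet.conf.d/*.conf")
	sort.Strings(fragments)
	for _, frag := range fragments {
		merged = apply(merged, parse(frag))
	}
	fmt.Println(merged)
}
```

Later fragments win on duplicate keys, which is what gives a documentable order of precedence rather than an all-or-nothing override.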
A
On that: there is a separate issue in the kubeadm repo that overlaps with this topic, and I would like to make sure that it's addressed by it. We need to come up with a proposal for unification of how we deal with command-line arguments, component configuration, and overriding systemd stuff, right, and we need to actually deal with this. I would probably like to have a proposal maybe within the end-of-1.12-ish timeframe or the beginning of 1.13. I can cross-link against your issue.
G
And we linked your pull request, which went into the agenda. The biggest problem is not picking a location for configuration files; the problem is that we have two entities which try to configure one component. You have kubelet arguments which come from your package, and we have kubeadm, which also expects to pass some kubelet arguments. So if we find a way for distros to introduce defaults to kubeadm, that would be a lot better than just trying to sneak in some parameters to the kubelet, yeah.
H
What I've done today in Kubic, which kind of takes the issue off the burner for now, is having the packages manhandle /etc/sysconfig/kubelet. So, you know, kubeadm is reading that from its systemd unit file: when Docker is installed, it'll be pre-populated with the Docker parameters, and when CRI-O is installed, it'll be pre-populated with the CRI-O parameters, and it works. It's potentially messy and fragile and will probably all fall apart at some point in the future, but it's a starting point at least.
H
That's where having a decent strategy for how to tier these, how to have, you know, some hierarchy and structure, would help; then we can at least document that and say: okay, this is the order of precedence, this is how things work. Yeah, it all seems a little bit random at the moment.
A
It totally is, and that's part of it; there are meta topics that address these things. I do think we should have the pieces of the puzzle, but some of those pieces are currently in flight. So I think if we have a grand unified field theory of what we want, the timing is fortuitous, right, because these other pieces are finally moving into place.
I
Okay, so for those who don't know, I am the current CI signal lead for this release cycle, and we're getting close to code freeze. We have tests that have been failing for some time and that we need to fix, so I want to link those tests here: the kubeadm job hasn't been green for a while now, and there are also multiple upgrade tests that have been failing for some time.
C
So the first one I can fix pretty easily now that we have the kubernetes-anywhere change in with a branch. The other one is very weird: we have DaemonSet-related issues in a bunch of different tests, and they are not related to Kubernetes commits or test-infra commits. So my only other guess is something GCE related, because I have no idea what else it could be.
C
In terms of the GCE side, like I already said, I already made a comment to Muhammad about this: if we get some sort of an expert from GCE who can look at the tests, like, what are these DaemonSet failures, and if he can debug it and work on it, we can get feedback. Because, rather than going to all the SIGs and telling them to look at the tests, I feel like there's a unified problem underneath all the test failures, no?
D
I think that's not a bad idea, because, like you said, routing it to this group is almost in some ways routing it into a black hole, because it's only if I notice it that I can find someone else at Google to work on it. Whereas if we route it to SIG GCP, then presumably more Googlers will notice, and we can also sort of put more people's feet to the fire about getting rid of the cluster directory.
D
On individual tests, I think what Tim is saying is that there are a lot of places where the sort of roll-up of a test suite comes to SIG Cluster Lifecycle when it should be changed to SIG GCP. So, like, I'm looking at the first one that was linked in the bug, which is "gce-new-master-upgrade-cluster-new", and it's the overall test that is failing; it's a skew test that is failing.
D
SIG Cluster Lifecycle is responsible for the upgrade, the "cluster should maintain a functioning cluster during upgrade" part, and SIG Apps is responsible for the DaemonSet upgrade. So when I look at that, I think that the overall thing is probably failing because the SIG Apps test is failing, and SIG Apps should look at why the DaemonSet upgrade is failing. If that gets fixed and the overall one is still failing, then it should probably be SIG GCP, instead of SIG Cluster Lifecycle, that debugs why that's broken. Is that sort of along the right lines?
A
I will take an action to take a look at those tests and try to PR the tagging that we do inside the tests, so that they get routed a little bit better; perhaps we can do an ordered layering thing. I have a SIG Testing meeting that I have to go to later on today, and I can discuss that with them too as well, so that way tests can get routed properly.
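The routing being discussed rides on the SIG tags embedded in e2e test names. The sketch below is a hedged illustration of that convention, not the actual upstream tests; the sigDescribe helper and the spec text are made up, and the real suites use the framework's own SIGDescribe wrappers.

```go
// Hypothetical sketch of SIG ownership tags in Ginkgo e2e test names.
// Dashboards and triage tooling route failures on the "[sig-...]" substrings.
package e2esketch

import (
	"github.com/onsi/ginkgo"
)

// sigDescribe prefixes a spec with its owning SIG, mirroring (not copying)
// the upstream SIGDescribe helpers.
func sigDescribe(sig, text string, body func()) bool {
	return ginkgo.Describe("[sig-"+sig+"] "+text, body)
}

// A DaemonSet upgrade spec tagged for SIG Apps; the roll-up job that runs it
// can still be owned by SIG Cluster Lifecycle or SIG GCP.
var _ = sigDescribe("apps", "DaemonSet upgrade", func() {
	ginkgo.It("should maintain a functioning DaemonSet during upgrade", func() {
		// ... the actual upgrade assertions would live here ...
	})
})
```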
B
At the beginning of the cycle we discussed deprecating the self-hosting feature, and we never decided exactly how to do this, so I have two options. First of all, I think that beginning with 1.12 we should make the user unable to create new clusters with those flags, so basically deprecate the flags; that is the first piece. But the most critical part is how to manage existing clusters that are using those feature flags. I have two proposals, and the first one is to block on upgrade.
A
The complexity that added up over time was not worth the benefit or gain, because we could do it in a separate layer, in a separate tool, in a separate step, right? That separate layer or tool could be Cluster API upgrades, or it could be a separate pivoter that could do these things; it doesn't even have to be managed by kubeadm. So that's the background of why we're trying to do that, and I'm okay with blocking the upgrade with enough breadcrumb information. But that's my take on it; do folks have other thoughts, I guess?
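A minimal sketch of the "block the upgrade with enough breadcrumb information" idea: a preflight-style check that refuses to proceed when the deprecated self-hosting gate is enabled. The function, the gate handling, and the message text are illustrative, not the actual kubeadm code.

```go
// Illustrative preflight-style check; not the real kubeadm implementation.
package main

import (
	"errors"
	"fmt"
)

// checkSelfHosting would run early in an upgrade, before anything is mutated,
// turning the deprecation into an actionable error rather than a broken cluster.
func checkSelfHosting(featureGates map[string]bool) error {
	if featureGates["SelfHosting"] {
		return errors.New(
			"this cluster uses the alpha self-hosted control plane, which upgrades no longer support; " +
				"pivot the control plane back to static pods (or use external tooling) and retry")
	}
	return nil
}

func main() {
	gates := map[string]bool{"SelfHosting": true} // e.g. parsed from the stored cluster configuration
	if err := checkSelfHosting(gates); err != nil {
		fmt.Println("upgrade pre-flight failed:", err)
	}
}
```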
D
The other thought is: does this sort of signal that the SIG is sort of moving away from the self-hosted model? I know, like, hearing from the folks at CoreOS, they've moved away from the self-hosted model; talking to some folks from SAP, who are working on Gardener, they've moved away from the self-hosted model. It seems like this is signaling that kubeadm is moving away from the self-hosted model. Is that sort of a trend, like, in terms of the cross-cutting things across multiple parts of the SIG, or is this just...?
A
It is, okay... I mean, there are costs and benefits to this, right, and I don't think it falls cleanly into bucket A versus bucket B. I think for specific user stories it makes a ton of sense, but I don't think kubeadm should take on the complexity, and that's the choice that we were explicitly making at the beginning of the cycle, because as we started adding these other features, like HA capabilities, natively into kubeadm proper, it complicates the upgrade scenario and the master join, or the control plane join, scenario.
A
So by eliminating that complexity, it simplifies the path to beta, and you can defer that effort into a separate tool; and then, if there's demand for that tool, I think that the pivoting, and the management of generating the manifests and then making the manifests look like self-hosted manifests, could be done by anyone. It doesn't need to be done by, you know... it could be done by other people who have an interest or want to do this effort, but that also defers a lot of the other complexities that we had in the past.
H
Some context on the Kubic side of things, behind the curtain, because we haven't really announced any of this publicly: we are looking at a self-hosting solution using kubeadm, but then with our own external tooling that we're currently working on and with our external approach. So, you know, the kubeadm vision of not doing the self-hosting itself and then expecting something external to handle the magic fits in exactly with the direction we've been playing with; you'll see more about it if it actually works.
H
We're well aware of that; we've seen what you guys have been doing. To be honest, inside the team there's a bit of a split: I'm very much, up to a point, kind of with you guys, but, you know, we have others who are far more keen about it. So we're experimenting with both in the project and, you know, we'll see who survives, yeah.
C
Well, there are a couple of different things people call self-hosting; I've seen it on the Internet. One is, like, systemd services versus static pods; people call that self-hosting as well. The self-hosting we are talking about here is the one where you run the control plane in a DaemonSet.
D
Do we give a warning message if someone is self-hosted? What's going to happen when we actually try to upgrade them? Do we want to test that scenario? I mean, certainly it's easier to implement a hard stop than to try and basically flip them back out of the self-hosting mode and make sure that that actually works, yeah.
C
I at least did some investigation of the topic in terms of the deprecation policy for these, and since these are alpha, the API machinery folks told us that we can pretty much do an "action required" note in the release notes and remove them right away. But of course Fabrizio brought up the point that this might break existing clusters in terms of upgrades, so that's the whole discussion here about how we proceed.
L
Somebody can send a message to Mike Okada and ask him what he and the teams there are doing. They have their own runbooks for managing the manifests and pivoting the control plane in different failure modes. Honestly, they are the authority that I am aware of on self-hosted Kubernetes at scale with multiple clusters, and Mike has said in the past that they would like to get into these meetings, but they haven't done that. So I would say that they can...
L
They can be their own owner of that, and we can discontinue the feature; and if they show up, then I would be happy to entertain, you know, actually supporting it for a wider community. But it makes sense to me that we would just provide a release note that says: hey, we're not supporting this anymore, since we have no owner, no technical owner, no user, and no feedback loop for any self-hosted bits.
A
You could basically have a tool that does what the internal portions of kubeadm do to pivot the manifests from a static manifest to a self-hosted one. That pivoting code could be excised into a separate tool, and you could basically pipe the output from the manifest generation into that separate tool. And if we wanted to, we could even put that tool in the kubeadm repo.
L
Can't you just use an old version of kubeadm? Do we have a phase for the reverse pivot? No? You see, that's the problem, we never did that. Okay, cool. Well, I mean, people just have to undo their thing, and whether or not we want to document that is, I guess, the question. So, yeah, we don't need to write a reverse pivot; unfortunately, we don't have one. So that seems to be a bit of a moot point.
L
I guess I have a question: I mean, I never get to see, like, Robert Bailey and folks anymore, so... Justin SB is on here as well, so you guys are working with clusters in different contexts. Does anybody here run the metrics-server in production?
A
Metrics-server... okay, this gets into the add-on space. If we want to get into add-ons and add-on management for second-order things, I'm really... I'm pretty opposed to some of these things; we can always defer to the other SIGs. Like, I would gladly defer to a well-documented location from SIG Cluster Lifecycle saying, okay, your control plane is now stood up and you want to add these other things, go there, because that story shifts and changes and is inconsistent across providers.
A
So I would much rather go and defer on that to a well-documented location, kind of like how we're trying to push SIG Cloud Provider to be the aggregator for cloud provider integration. They are updating the documentation there, and instead of us trying to own all the integration pieces, we are simply going to defer to documentation of how you change things in that location. Cool.