From YouTube: 20190226 sig cluster lifecycle
A
Hello, today is Tuesday, February 26. This is the standard SIG Cluster Lifecycle meeting. Our agenda is kind of light today. If you have topics that you'd like to discuss, please add them to the meeting notes. Also, you can add your details to the meeting minutes; that would also be helpful. Starting us off: Alexander, you want to talk about KubeCon China?
B
Yes, so in the previous meeting we were discussing what we submitted as proposals for intro and deep-dive sessions for KubeCon EU, and we don't have anyone yet for China. So I chatted offline with [inaudible], who had presented at previous KubeCon China talks, and both of us are practically volunteering to do, for the next KubeCon China, about six sessions.
A
There's a SIG slot: we have time to do an intro and deep dives for any of our projects and subprojects, so it's pretty much a somewhat guaranteed slot. So, if you're interested, can you please add details to the meeting notes? And can you coordinate offline to get that squared away for logistics?

B
Yeah, I'll go ahead and do that. I think I will sync up with the other folks too after the meeting, the ones who are not here (unless Antoine magically shows up), to see who else might be able to do a deep dive.
A
Next up is a topic I just wanted to get a brief TL;DR update on. Maybe it's mainly for Justin and myself, but everyone should be aware of it. I don't have any updates, but I wonder, since you're more connected to all the other people, the Googlers who are very interested in the CRD lifecycle problem: do you know if there are any updates on that?
C
As far as I know, we have not. We are basically punting on anything that needs to be a CRD and turning it into a core API where possible, which is not ideal, I think. I think there are other issues, other feature gaps with CRDs, that mean that even if we were to magically solve the lifecycle management or CRD management issue, we wouldn't necessarily push everyone to use CRDs yet. I think, in terms of our CRD installation topics...
C
My personal view is, if we just say, "here's a manifest with the CRDs and any controllers, if you need them," and then we make sure that that manifest works with kubeadm, or, say, Cluster API, and we make it work with kops, I think that would be sufficient. I think the bundle is a good approach there, but orthogonal. So I don't know what your view is, Tim.
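The "manifest with the CRDs and any controllers" idea can be pictured as a single file that any installer applies as-is; a minimal sketch, with every name and image hypothetical:

```yaml
# bundle.yaml: a CRD plus the controller that serves it, shipped together.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.io          # hypothetical CRD
spec:
  group: example.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: widgets
    kind: Widget
---
apiVersion: apps/v1
kind: Deployment                    # controller that reconciles Widget objects
metadata:
  name: widget-controller
spec:
  replicas: 1
  selector:
    matchLabels: {app: widget-controller}
  template:
    metadata:
      labels: {app: widget-controller}
    spec:
      containers:
      - name: controller
        image: example.io/widget-controller:v0.1   # hypothetical image
```

Making this work with kubeadm, Cluster API, or kops would then amount to each tool running the equivalent of `kubectl apply -f bundle.yaml`.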
A
It probably needs to be addressed as a SIG, yeah, within the 1.15 time frame. Maybe at our next meeting, if we have sort of meta topics that apply to the whole SIG, we can start to get people rallied around execution on some of these things, because that one is thorny and affects everything.
A
If you want to think of it that way. And because of that, we need to be able to think about lifecycle management of these things, and we haven't really thought about the problem at all, to be honest. But we need to address that issue, probably in the 1.15 cycle, and get sort of a small strike force or a working group to potentially find a happy landing and home for this, and maybe we kind of promote bundles to be a subproject that pulls this in as part of that process.
C
I think also that, today, the requirements are fairly unclear. Like, if it was just the case that... I think, what was it, storage? Something with CSINodeInfo, for example, is the one we were talking about, but they have now made it a core API, or are going to make it a core API. So sure, we can install those via an add-on, but are there requirements specific to CSI, and specific to the bootstrap, that make that insufficient?
C
In other words, does it have to be installed first, or installed early, or something like that? I think it's sort of a two-way street: what are the guarantees we can provide, and are those guarantees sufficient for the various consumers? And where they aren't, that's where we have to deal with it. Yeah.
F
I mean, the observation I have is that the functionality exists and people use it. Whether we categorize the way to use it as an add-on or as an application is not germane to the fact that they are using it, and it would be nice if there was a cohesive way for people to manage them. I don't think it's a cluster-lifecycle-unique problem, right? I mean, I think this generally applies to applications too.
A
For applications, yeah, but it gets into this weird... basically, you're falling down the rabbit hole that we all fell into when we first started talking about this. You get into the state space where you have all the tools for cluster lifecycle to stand up a cluster, right? But the problem is, if you start federating out the core APIs, what have you stood up? Do you even have anything that's functional at all, right?
F
So, I understand. I don't think it's a black-or-white issue, though; I think there's a gradient of functionality. So you could have a cluster that works for infrastructure engineers, and you can say, "well, but it doesn't work for my developers." Well, maybe not, if you can't get it installed. So I guess I don't want to rabbit-hole here. My hope is just that we have engaged with SIG Apps, and the solution we pick hopefully either works for them or leads them in a similar direction.
A
I did everything in my power to try to punt this out of SIG Cluster Lifecycle, because I did not want to own this, but it keeps on coming back to us. You sound like you're very interested in the problem, to me. So maybe, when we start planning for 1.15, you want to take it on, along with the fun nerd fight over what it means to be a chicken-and-egg problem? Sounds enjoyable. All right, so how about the next topic?
C
Next up, Justin. I just inserted this; about two meetings ago I put in a link to the KEP for add-ons. I have neglected this and let it fall off my radar, but I guess we will get to it. I just put it in as it feels related to what we were just talking about, in that the add-ons are also likely to use a bundle.
D
So, for Cluster API, some of the highlights are: we've been working out how to handle releases and release versioning, and the associated Kubernetes versioning. There's a linked doc in the notes there if you're interested in providing some feedback. We've also recently added support for cascading deletion, so now, when you delete a Cluster object, it will delete the associated Machines. There is still some follow-up work there to support MachineDeployments and MachineSets, and I've linked to the issue there for tracking.
D
Also, we're getting ready to land delete policies for MachineSets; I've linked to the PR. What that allows for is integration with the Kubernetes autoscaler. By default, the policy will just delete a random machine, but there are also a newest and an oldest policy supported as well. And just kind of a rundown of status for v1alpha1: we currently have 22 open issues, and we're looking at trying to deliver March 29th.
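The delete policy described above lands as a field on the MachineSet spec; a sketch of what selecting a policy might look like, assuming the v1alpha1 API group and the field name from the PR under discussion (names here are illustrative):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineSet
metadata:
  name: workers                 # hypothetical MachineSet
spec:
  replicas: 3
  # Which machine gets removed on scale-down or deletion:
  # Random (the default), Newest, or Oldest.
  deletePolicy: Oldest
  selector:
    matchLabels:
      set: workers
  template:
    metadata:
      labels:
        set: workers
    spec: {}                    # provider-specific machine spec elided
```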
D
So, please feel free to take a look at any issues you may be interested in and help contribute. Out of those 22 issues, we have four that are priority/longterm, so we can bump those if needed, and we currently have seven that are lifecycle/active. So we have plenty of open tasks if anybody's looking for something to do.
A
At some point, maybe again at the next meeting, we should add an agenda item to talk about what it means to release this massive plethora of subprojects that exists, and how we could unify some of the release mechanics across all the subprojects and simplify everyone's life, yeah.
A
It's T-minus two weeks, and I think there'll be enough time for folks to think about it. I did have a question with regard to the deletion of machine sets, especially when doing it at a grand scale. I didn't look through the PR, but just a quick question of whether or not folks have thought about using disruption budgets on deletions.
A
Not to do it directly, but there are also weird kinds of problems that can exist when you delete a bunch of things. So, as long as all the workloads have their disruption budgets set properly... and here's the problem, not everything uses them properly. So if you enforce some standard policy, you know, about disruption budgets on the machine deletion...
A
You can kind of do a graceful thing, and you could also offer a force option if you wanted to. It's also been asked for by a number of different people using Cluster API over a period of time, wanting to make sure that they have a rolling deletion or update policy, so that when they're doing updates, upgrades, or deletions, it's all done in a rolling-style fashion, trying to be as respectful to the infrastructure as possible.
D
Yeah, so recently, because there was a new PR out to factor the code for doing drain out of kubectl itself into a reusable library, I created an action item, or an issue, on the cluster... well, I did it under cluster-api-provider-aws, to track that, as we do machine deletion, we actually drain the nodes prior to deleting the machines. Because right now, at least for AWS, we just kind of delete the instances underneath, and it's not really graceful, yeah.
A
Well, the drain is a given; it should be part of the workflow. But I'm trying to think of, like, higher-order disruption. Not everybody respects disruption budgets on their pods, so having a disruption budget on the machine deletion itself, or as part of the MachineSets, would help to minimize the potential blast radius when a person decides to do an upgrade or a deletion of a certain number of machines.
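For comparison, this is roughly what the existing pod-level disruption budget looks like; the suggestion above would apply an analogous budget to machine deletion (no such machine-level API existed at the time, so this is purely the pod-side reference, with a hypothetical app label):

```yaml
apiVersion: policy/v1beta1      # the PodDisruptionBudget API version current at the time
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2               # voluntary evictions are refused below 2 ready pods
  selector:
    matchLabels:
      app: my-app               # hypothetical workload label
```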
A
Yes. Okay, next up: minikube.
E
Yes, so I wasn't sure exactly which time period to cover, so I tried to do everything for the quarter. Thankfully, most of the important stuff has been pretty recent. So we released 0.34 with the new interface: lots of stability fixes, consistent IPs, and initial support for language localization (a little bit more on that later). We did suffer a regression when we pulled in a new version of libmachine where, on AMD processors with VirtualBox, it says they don't have virtualization enabled, so we're working to get that fixed.

E
We did just introduce a proper abstraction layer for container runtimes. We've had support for alternative container runtimes for a long time, but it was very hacky, so 1.0 will be really the first release where CRI-O will be a first-class citizen, and containerd as well. We're basically into the final sprint for 1.0, to be released in about three weeks. There's a link for issues if anyone's interested; they should all be marked as "good first issue" or "help wanted."
E
So our focus for 1.0 is documentation, integration testing, and better detection of configuration issues, proxies being probably the biggest configuration issue that we face. People have all sorts of very interesting HTTP proxies that block access to the internet, and that's always exciting for us.
E
We've also started talking, in our last office hours, about post-1.0: after 1.0, we're basically focused on how we make minikube usable by the next billion users, you know, the next set of people who have never used Kubernetes. They're using minikube as a way to learn how to do Kubernetes, and they're not necessarily system administrators yet. So we basically would like "minikube start" to work for everybody with no flags required, and if there are any dependency or system configuration issues that need to get resolved...
E
...it should offer to handle those. Like, "would you like to download HyperKit?", or, "we noticed your HyperKit driver version is old," things like that. Language localization is going to be very big for us; we plan on presenting minikube in Shanghai in traditional Chinese, so that's something we've been prepping for. And also support for local image repositories.
E
A wonderful question. Yeah, I don't actually know how they're getting it. I've read some guides, and I've seen it uploaded to some random websites internal to China. But a lot of our issues actually coming in are from people in China trying to use minikube with strange proxy configurations to work around the firewalls. We would like the experience to just be "minikube start," no fancy setup or flags.
A
Maybe that's another topic for next time, when we start doing planning for the 1.15 cycle. To jump on your topic about issue triage: as you asked a while ago, I wrote up a doc with the TL;DR of how we do triage, and it's inside the community repo, so folks who want to follow that can feel free to follow along. It's gotten a good amount of positive feedback so far from a bunch of different people.

E
I'll check that out. Thank you.
A
Next up is kubeadm. I see four bullets and a couple of notes there, but if you wanted to give more of an update, that would also be useful. The four major topics I have are the improved HA lifecycle, join phases, a lot of the work that Lubomir did to integrate kind as a default option for the builds and testing, and a number of bug fixes. Anything additional to that?
C
Yes, I tried to put in, like, the sort of highlights of the last three months. The big one is that we're finally getting everyone to etcd3, so hopefully we will never again speak of etcd2, and I hope there is no etcd4, but we will see. One of the interesting things we're doing right now is, like, the runc CVE, which is sort of interesting because we have packaged Docker...
A
I don't see anyone here to give an update for Kubespray, so I'll try to do that asynchronously. By extension, kind isn't necessarily a subproject of Cluster Lifecycle, but Ben is here, and a lot of folks who work on this thing also have a lot of interest in kind and have thrown down resources and helped the effort there. Are there any updates that you'd like me to specially call out during the community meeting?
A
When can I... when can I tell everyone that local-up-cluster is dead? That's the question I want answered.
A
Okay, I think we got updates from everybody. Are there any more group topics that folks would like to discuss? Cool: once, twice, three times. Okay, so the only thing that I want to make sure we follow up on is that I will assign action items in the notes, so we're armed to the teeth for our next conversation in two weeks, which will hopefully be a good time for us to discuss the 1.15 items and the great CRD migration path that doesn't go through the Oregon Trail.