From YouTube: 20190702 sig cluster lifecycle
A: Hello, today is July 2nd, 2019. This is the SIG Cluster Lifecycle meeting. I'll go ahead and share my screen. If you could add your details to the meeting notes, that would be useful. Also, if you have topics that you want to discuss, please add them in the appropriate subsection and we'll try to make sure we address them in this meeting. So, walking through from the top of the list.
A: First up is a follow-up on the notices that I've given time and time again: today is the day to vote with regards to the image building proposal. If you haven't had a chance to look at it, I've linked it in there several times; I'd have to go back in time to figure out where it is.
A: My plan is to address the current comments; I think there have been some minor ones here. I just want to make sure it's all batched up, do a little final polish on the text, and then from there go ahead and green-light it. The typical standard operating procedure we follow with this is: we put the proposal up, we gather feedback, and then we...
A: It looks like we have critical mass. What I'll do is update the KEP, and if there are any last comments, please add them; we'll do a lazy-consensus merge by the end of the week. I do know it's a holiday, but that's the way it is. I'll do the update today, and if there are no issues I will do the formal logistics work I need to do with repo organizations and whatnot, and then next meeting we can talk about the details of where everything is.
A: Right, next up: the state of CAPI ownership and the cap-* providers. This was an action-item follow-up. I don't see Fabio on the call, but that was one of the topics he had brought up, and Justin, I know you had mentioned wanting to make sure that we had parity there as well. Has there been any progress on any of this?
B: No, it is just a link to the mailing list. A doodle went out for coordinating on the best time for this onboarding meeting. So maybe in, let's say, two or three days from now, we can collect the results and go with the winning slot sometime next week. And yeah, hope to see folks there as well.
A: I think one of the things that might help is awareness of this. Most people don't really realize that component config is the way we're trying to standardize how to do customizations for deployments, and there have been some high-profile tweets, basically around people trying to figure out how to do customization of the environment. I don't know if it requires awareness documentation, some type of, maybe, a blog post, to get more eyeballs on this space, because I know that's been kind of the problem.
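Component config, as discussed here, means Kubernetes-style versioned configuration types for each component. A minimal sketch of what that style of customization looks like for kubeadm with an embedded kubelet configuration (the field values are illustrative only, not recommendations):

```yaml
# kubeadm config file with an embedded kubelet component config
# (values are illustrative only)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150
```

Passed via `kubeadm init --config`, this replaces ad-hoc flag overrides with declarative, versioned configuration.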
B: We could do it in the onboarding meeting anyway; I don't see that as blocking. Yeah, I mean, a recording would be better. If we had done it before, we could just link it, so there's an awareness blog post and then it's linked in the onboarding session. So if people want to see how it works, yeah.
A: The next step was a follow-on from last time, which was that we have a couple of long-term action items, one of which was a retirement effort and the other was removing the /cluster directory. I did not do any updates with regards to the /cluster directory, although I plan to start triaging that issue this week. So maybe that's an action item for me.
E: Tim, specifically on getting rid of the cluster directory: that's one thing that I was looking a little bit into. From my initial look at it, it seems that a lot of the work is going to be identifying places in, say, the documentation that refer to scripts and pieces that are in the cluster directory, and replacing that documentation with more upstream-recommended approaches. Is that correct, or at least a good chunk of that work?
A: That's a piece of it; I don't know if that's a good chunk of it. We should not be referencing those documents at all, really, right, because it's been deprecated for eons. But that is a definite to-do item on the list. In fact, I think that should be among the first things we do. This should be low-hanging fruit that anyone can execute on, yeah.
A: Well, if you want to, feel free to asynchronously add details here. I think, given the maturity of some of these, we may want to roll them up, I don't know, into other talks; that's a possibility until we actually have a formal release, and then once we have a formal release, we might want to have them have their own session.
A: We've already done it for kubeadm. I think Cluster API did that last cycle too, because they're a little bit off-kilter with regards to the release cycle, and I believe add-ons also did that recently, because I got looped in on a couple of issues with Justin. I don't know about other projects. I do know that etcdadm was looking for more reviewers and approvers.
B: I was thinking about, well, we concluded last meeting that it's good to graduate, for example, members to reviewers, reviewers to approvers, and maybe also approvers to subproject owners in OWNERS files too, and that we'd encourage subprojects to do it once every cycle. As Tim pointed out, different projects have different cycles; for example, Cluster API is not running on the same schedule as kubeadm. But anyway, what kind of schedule makes sense for kops, for example, or Kubespray? And has there been any graduation of people there?
C: It's a good idea. In kops we do it sort of as needed, which basically means that we forget, so the onus has sort of been on people to ask. I really like the idea of doing it on a schedule. We have not done it on a schedule, but we have been promoting people recently, or moving them up the levels, I guess, and I would say I've never regretted anyone that we have moved up a level.
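The graduations being discussed are, mechanically, edits to a subproject's OWNERS file. A minimal sketch (the usernames are hypothetical):

```yaml
# OWNERS file at the root of a subproject repo
# (usernames are hypothetical)
reviewers:
  - alice
  - bob    # graduated from member to reviewer this cycle
approvers:
  - carol  # graduated from reviewer to approver this cycle
```

Doing this once per release cycle, as proposed here, turns promotions into a routine review instead of an as-needed afterthought.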
G: According to the deprecation cycle, this will happen in 1.18, and there is no planned change in the kubeadm API for this cycle, at least nothing yet; there is not yet critical mass of requests. Also, the kubeadm blog post, linked to the 1.15 release blog post, went out one or two weeks ago, and that's it. That's all for me.
A: Then you do continuous triage as part of your standard office-hours meetings, and we also talked about, in this last section a while ago, evaluating at the end of a milestone whether or not you should promote folks. This pattern has proved itself very fruitful, so this is more like a PSA to follow what we talked about. Any other questions? Going once, twice, three times. Next up: Cluster API. I don't know who wrote that; I want to put their name there.
H: So, three PSAs: we have had three releases in the past week. CAPI has been released at 0.1.4, which brings the NodeRef changes: we have a new controller that actually adds the node references, and we have more events that users can consume on the Machine, MachineSet, and MachineDeployment controllers. The Cluster API AWS provider has been released at 0.3.3 to support the NodeRef changes, and we also have CAPV released at 0.3.0, and there have been a lot of changes in CAPV. I'll suggest looking at the extensive release notes that Andrew put together, so props to him for the great work. And I believe CAPV is now at feature parity with CAPA, so that's actually a great milestone for the project.
A: Two questions, well, one question, for the broader audience: that's a pretty confusing version scheme. I don't think anyone besides the people who are working on the project would understand what it means. So what is 0.1.4, and why does that translate to a 0.3.3?
I: One of the things we have to keep in mind, and this is something that we ran across both with the AWS provider and the vSphere provider, is that there may be breaking changes to the provider that still work with the same API version of upstream Cluster API. So while it's relatively easy for core Cluster API to stick with the API types we define for the minor version at this point, we can't necessarily rely on that for the AWS provider or the vSphere provider.
I: For the AWS provider itself, we had to change the way that we were labeling instances within the AWS API, because it was causing an issue with the integrated cloud provider in Kubernetes, so that introduced a breaking change there. But we still kept working against the same functional version of Cluster API, so whatever we define needs to be able to handle those types of changes as well.
A: I think we should probably take this offline, but focus on maybe addressing this in v1alpha2. It has to be addressed before beta, before broader consumption, for sure, but maybe address it in, if you want, alpha 2 plus some time, to have a common version schema across providers. When people do an update here, it'll make more sense for the consumption model, because some people just want to be consumers; they don't want to know how the sausage is made.
H: Cool. For v1alpha2, the types have been merged. One of the biggest changes has been the domain, which has changed from cluster.k8s.io to cluster.x-k8s.io. The v1alpha1 types are all still going to be in the older domain for now, unless we have a strong reason to bring them into the new domain, since that would complicate things like upgrade paths, and we're currently working on the controllers.
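In user manifests, that domain change surfaces as a new apiVersion on the Cluster API types; a sketch (the object name is illustrative):

```yaml
# old domain, v1alpha1
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: example-machine
---
# new domain, v1alpha2
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: example-machine
```

Keeping the v1alpha1 types in the old domain avoids conversion between API groups on the upgrade path.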
H: Good approach. One other thing: I'll probably post a message in the next week, or this week, about pairing with someone. If someone is interested in pairing on working on v1alpha2, I would be willing to set aside the time to onboard newcomers, or people that just want to learn more. That's it from me, yeah.
J: The screen is a little slow to refresh from what Tim is sharing. Lucas had a question: can we quickly say why we have this provider, or why we need it? What problem does it solve? The nice thing about the Docker provider for Cluster API is that it lets us do a relatively self-contained test of all of Cluster API without needing to pay for cloud resources.
C: The idea is that we're trying to be very focused on a real-world use case, because the whole space of add-ons is so broad that, without actually getting concrete, it becomes almost impossible to make any progress. So we've merged a PR; it's not perfect by any means, and we are looking into extending it to make it better. For example, I'm working on tooling to generate add-ons, and I think Jeff Johnson was looking at updating it to convert to 0.2.
C: I think Daniel and Lee are looking at docs, mostly, and I'm mostly thinking about the KEP right now, and I'm sure people are also going to do other things as well, but those are sort of the big headlines that people were talking about working on last week. The current medium-term goal that we're aiming towards is to produce an add-on operator for something that kubeadm, kops, Kubespray, basically the big installation tools, will choose to integrate, that they'll want to integrate. A likely candidate for that, we suspect, is node-local DNS. We're not there yet, but we're trying to get to a place where we produce something that is valuable, that solves real problems, such that the installation tooling would want to install it or choose to install it.
D: We need to kind of poke at that again. And then we have a new contributor, who I believe is on the call, who's beginning work on legacy flag integration with kube-proxy and then eventually kubeadm. That's taking some of Mike Taufen's work to distill our approach to flags into a vendorable library in the legacy flag repository.
D: Then, consolidating and standardizing that approach in the existing components, so that we have an actual way to think about how that's maintained and eventually deprecated. And then again, as mentioned earlier, the meeting-time scheduling doodle is up for the contributor onboarding meeting, so we're going to try to get a blog post out about that. Please update your schedules there if you would like to attend and are interested in contributing; we'll try to make the session as useful as possible, and if you are not able to join, we'll make sure it's recorded. Thanks.
C: Hello, yes. I think our big thing that is probably of most interest generally is that we are, we think, getting closer to being able to distribute our artifacts, which are both containers and binary artifacts, through the k8s-infra working group infrastructure that is spinning up. I believe the end of that process is on the horizon, as it were. So I think the next thing that we're now thinking about is how we do more automated releases; I think this is something that cross-cuts all the projects.
C: Almost like it could be a subproject of SIG Release, or something like that. But yeah, I just didn't know whether there was a... it's clear who should not own it, and I don't know who should. Okay, I mean, I think the way we traditionally do it is, you know, kops builds something, Cluster API builds something, we look at where we are and see what works and what doesn't, and then we pick the best of all the various worlds that have been built. But I don't know where.
K: Thank you, yeah. So we had a meeting last week; it was great, and actually I think more people attended than I expected from the doodles. Great input, and we made some decisions around what to do first. The number-one request, I think, was to create types for etcdadm, you know, a Kubernetes-style versioned configuration, so that's something that we'll start working on. I'm also working on getting tests running.
K: These would be sort of end-to-end tests, using probably not kind itself but a variation on it, with containers that have systemd enabled, and I know that kind uses the same kind of idea, so that should work with Prow. That'll give us some signal, just basic end-to-end etcdadm init, join, and so forth, and then we can get fancier from there. And then I'm working on supporting etcdadm init from an existing data directory.
K: This is one of the ways that you can recover a cluster that has failed, in case you don't have a snapshot, or you prefer not to use a snapshot because it's outdated, but you have access to the state of at least one of the members. So that's another item. And then, I think, support for concurrent joins, which is something that Lubomir, I think, has added to kubeadm, and yeah, anyway.
K: According to the etcd documentation, that wasn't supposed to work in all cases, but it looks like it will, and it's sort of a bug that became a feature. So I think that will probably be useful, especially for integrating etcdadm into some kind of larger automation, where you just want to kick off maybe a bunch of control-plane replicas and just have them join.
K: A good question. Off the top of my head, I think to at least do, let's say, the end-to-end tests and the versioned configuration, that seems reasonable within a month timeframe; I don't know, I'd have to pad it a little bit. I don't know, Justin, you have more experience, if you want to add anything.
C: Yeah, I'd say I think one thing we should mention is the CLI experience: the basic CLI experience works great, right, and you're running that, I guess. So there is a baseline functionality that works, and we are talking about additions to that functionality.
A: I think that, in the fullness of time, having a single tool that manages the lifecycle of etcd in a Kubernetes-style fashion is super important for the project, but it is also "in the fullness of time"; I say that a lot, and it sometimes stretches out a lot. The question I'm going to have is about the way we currently do etcd management in the main repository, for better or for worse.
C: People are working on some of the functionality, like non-voting members, which are now called learners, thank you. That should make a lot of the automated stuff much more robust, so I think it would be great to have that in etcdadm, right, okay.
K: Yeah, well, I'll grab that, I'm happy to grab that. Also, I guess, Justin, Joe Betz, I think, is his name, he did...
A: You know, getting stuff done, clearing out these terrible bugs, it's usually been a valiant effort from the people who have encountered them, which has never been sustainable. I was one of those people for a long period of time; I backed away, but that doesn't mean the problem is solved.
K: I mean, I guess the question is: would you have any recommendations for releasing? Like Justin said, the core functionality is there; people are using it, you know, we're using it in production. It's alpha because it's still developing; we're going to be moving away from flags to this versioned API. So I'm just wondering about sending some kind of signal with the release.
K: Yeah, that sounds reasonable, and Lubomir brought up that point at last week's meeting, that it would be nice to be able to consume etcdadm, perhaps as a library. Although, you know, both Justin and I feel pretty strongly that the CLI experience is a plus, a benefit for end users, in order not to hide this functionality.
A: I think the ideal answer is both. I mean, if you have component config, you should be able to do it programmatically, and people should still be able to use the tool as well. I think the story should be, you know, exactly what we want to do with everything else: some tool can drive it through automation, using the well-defined configuration in a declarative fashion, and other people can use the CLI in a more imperative way.
C: I think the distinction we're playing around with is the idea that, if that automated tooling is non-trivial, like if you imagine a robot that reconfigures etcd for you in a cloud scenario, you can build confidence in the robot if what it does is expressed as literal command lines that you could run yourself, and then the user can also look at the logs and learn what it is doing.