From YouTube: 20180522 sig cluster lifecycle
A: We have a kind of sparse agenda, so folks could fill in the details; if you have agenda items, please add them to the meeting notes. The first topic is the conversation from last time: the Mirantis folks had offered (we talked about it during KubeCon) to sponsor the Docker-in-Docker solution being imported into the Kubernetes SIGs repository, and I wanted to have one last chat to see if there's any "speak now or forever hold your peace" around this topic.
B: I would really like this to be synced with sig-testing first, if that is feasible. I got pinged by James from Jetstack, and they have this fork of the Kubernetes test infrastructure that does pretty much the same thing, but with even more layers. The thought there is that it integrates really well with Prow, which is sig-testing's main project for automation, so that part works automatically. It could make sense for us to at least take a look, compare the differences between the two projects, and see if they can merge or not. I tried it locally and it seemed to work pretty well. I don't know, have others tried it as well?
C: Actually, the folks from sig-testing did chime in on the discussion topic in Google Groups, and the idea is that the projects have a bit of a different scope. The test-infra DinD solution is intended for building Kubernetes from source, while kubeadm-dind-cluster has a wider scope: for example, it's meant to be used by projects that extend Kubernetes, for testing on public CI; it works with different engines, and so on. So as far as I understand, at least from this message from the sig-testing folks, they agree that both projects should exist. Maybe there is a chance.
B: Okay, that's fine by me, I guess, then. My main concern after that is who's gonna maintain this stuff, 'cause I really want to have dedicated people; otherwise it's gonna slip. I mean, we talked about rewriting kubeadm-dind-cluster in Go more than a year ago at KubeCon Berlin, and I haven't seen that happen. And also, from what I saw, there was latency in merging things in the Mirantis repo; how are we gonna address that if we merge it into the Kubernetes one?
C: Well, hopefully I will be able to work more on the project now, at least given the internal situation in Mirantis, but I hope that, with the project gaining more visibility, maybe someone else will be willing to step in and help maintain the project. It would be appreciated very much.
A: So there is no formalization from the higher-level project perspective about whether or not you can have multiple projects within this SIG-sponsored location. I do agree that I would really like to Highlander these solutions, and by that I mean there can be only one, because there are many of them and they don't all meet all the requirements that we would like. But at the same time I don't want to start a new project; I want people to amalgamate on a single one, right? Yeah.
A: If people have bandwidth, there's nothing preventing us from moving, and if people have bandwidth it succeeds; if not, it dies and we do some type of attic process, right? And we could also let multiple things evolve in their own time frames. But I am concerned in that there is now, you know, a solution that works for most of the use cases, and I don't really care about the implementation details; I really don't. So long as I can spin up kubeadm clusters easily, it could be written in anything and I wouldn't care, so long as it worked and was maintained, right? That's the key. I'll stop there and see what the thoughts are.
D: I guess I would second the part about maintenance that both Lucas and Tim brought up, because I think we have seen, even with kubeadm, when Lucas disappears to go back to school for long stretches of time, that the bandwidth from the rest of us sort of has trouble filling the gaps. And so I think that having a new project, even if there's one really strong maintainer, is not enough, right?
A: From my perspective, part of the adoption into the SIG's repository is not for individual SIG sponsors to maintain it; it's meant to provide a common, safe playground for multiple vendors to work on, right? So long as the OWNERS file has legitimate owners that are the maintainers of it. And the SIG sponsors can help, at best, to get things rolling, but we're not going to be able to take on maintenance, and that's...
A: All right, I don't really have a problem with it; it's just... We could always do it as a trial period too; we can reevaluate, and I can talk within the steering committee from that perspective and see if we can land a happy home for it. And also, this could be a rallying point for people to converge around, because if Jetstack wants to contribute their thing, I'd be happy with them coming to talk, and it sounds like everybody wants a single solution.
A: Right, we could use this as a rallying cry, and we could talk with other folks, because Google has pieces of this in their testing and in Prow; other people have pieces of this. I know some Red Hatters that have done their own thing; I know the Jetstack folks have done their own thing; and this one too, so I...
B: I think a trial period is fair because, right now, as I said, the maintainership is still unclear, like whether there are enough people. But I'm happy to, as you said, use it as a starting point, and maybe, if we do this and it's (how should I put it?) controversial enough to have to save one of them, we might use it as a way to start the discussion to actually try to converge things.
D: Do we have a clearer perspective on this: is there any issue importing code from the Mirantis org into the Kubernetes org, or later kicking it back out? I know I was having some conversations yesterday where we were talking about trying to sort of repatriate some of the cloud provider code into the Google Cloud Platform organization, and the lawyers were telling us that that was tricky.
A: So long as the Apache 2 license has been adhered to, we should be fine. I will talk with the CNCF folks one last time before I pull the trigger. In my past life there was never a red flag as long as your licensing model was consistent; it didn't matter, because the licensing model would be okay. But I do know that Google's lawyers are very specific, versus Red Hat's lawyers, which are very different, so I guess...
A: You can dust it off the attic shelf and bring it back downstairs, and now your lamp is cool again. But I'll talk with them to try and get clarification on lifecycle. I'm okay with the trial period too.
B: That means basically we've approved it in the SIG, and you're just checking that everything's okay with the steering committee before doing it. But I think, if we don't have any strong opinions against it here, we could just say "feel free to go" after you double-check things, with that as the trial period. Is that okay by everyone else?
B: We messed it up last week when we were talking about the dates, but 2/5 is the current code freeze date; they might move that to the 29th in case things prove to be extremely flaky right now. SIG Release had a meeting last week where they said things seemed to be stable and everyone was okay with not deferring the code freeze a week; then, actually, like two hours after that meeting, all the e2e suites on GCE started flaking.
A: I wouldn't mind us doing an informal freeze by the 29th: you know, all in, let's try to get all the remaining items and feature additions in, namely the config changes (those are the big ones remaining) and then the dynamic configuration stuff. Anything beyond that should be "let's stop", because this release has a ton of changes in it; we had a lot of changes before now.
A: It's got a lot, a lot of changes in it, so I'd like to stabilize and start pointing folks at documentation and testing, because we clearly have a need to fix a bunch of documentation issues, as I mentioned in the last meeting. A big problem we have is stemming the tide of people submitting issues against kubeadm; it's more or less a documentation and UX problem, and I do think we need to address those issues by potentially dissecting some of the docs and rewriting them judiciously.
A: And by front-loading the UX questions into some of the docs, because I know there's a ton of issues with regard to people understanding how you do configuration overrides, and to having examples set in place. In the past couple of cycles, not just 1.10 but 1.9 and before, we added a whole bunch of knobs, right? And now, in stabilizing that config, we're pushing out a lot of those extra knobs and we're saying "use either the metadata or this other apparatus". So this is going to confuse a ton of people.
B: Yeah, so I wrote this up locally (I haven't had time to submit it as an issue), but the high-level docs we should add go along the lines of: packaging, installing and running kubeadm on systemd systems; running e2e tests against kubeadm clusters; customizing control plane arguments with kubeadm; securing your kubeadm cluster even more; customizing the kubelet and integrating with other CRI runtimes; customizing the proxy and DNS add-ons; testing pre-release versions of Kubernetes with kubeadm; and upgrading your version.
B: Today you would just go to the kubeadm reference and find those things, but they weren't discoverable. If you were on the kubeadm getting-started guide you wouldn't see them; you'd have to go to the reference and scroll down like 50 pages before you came to "this is how the drop-in works". So I propose to break most of the reference guide out into, for example, these nine topics. I'm gonna write, or I can take on writing, the upgrading-your-version one.
A: Let's log all the issues, and we'll get everybody rallied, at least from our side, to get the docs in place. It's a P0 to get that in place, because without it we're gonna be in trouble. We already have a bunch of docs issues currently, and we're gonna address some of those too, but what I'll try to do is rationalize the priority across some of the docs.
B: I've researched a lot to align with the folks that are working on similar things. I already wrote up the long issue; I'm gonna convert that into a KEP, but I might do that after the informal or formal code freeze, as I still have one or two PRs for that to...
G: ...submit. And we also have a bunch of API changes; are we gonna take care of those? Yes, okay. Something else about the docs, though: I don't think we can automate everything for 1.11, like generating a config and showing it to the user on the website. I guess we can point towards `kubeadm config print-default`, which is gonna help them, and we can also show the godocs for the MasterConfiguration as well.
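To make that concrete: `kubeadm config print-default` emits a full default configuration that users can save, edit, and feed back via `kubeadm init --config`. The sketch below shows roughly what that output looked like in the v1alpha2-era API; the exact field names, defaults, and API group version are from memory of that period and may differ from any given release.

```yaml
# Sketch of `kubeadm config print-default` output (kubeadm v1.11-era,
# kubeadm.k8s.io/v1alpha2 API; fields and defaults are illustrative).
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
api:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
kubernetesVersion: v1.11.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
```

A user would typically redirect this to a file, tweak a field or two, and pass it back with `--config` rather than hand-writing the whole document.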
A: I prefer godoc, referencing godoc versus trying to maintain things by hand, because this is a common pattern that I've seen fail everywhere: people write all these docs for configuration files, then the config changes, and then they have to rewrite their entire thing, or it gets stale and it's confusing to the user.
B: So the kubeadm configuration embeds the kube-proxy configuration, so that is actually there any time you print the MasterConfiguration. But I don't see us adding documentation in the print-default command, at least not yet. Instead, we have to generate, from godoc, a custom set of documentation for our API types and put it on our website. This would preferably work automatically, but I think in the meantime we should have the tools to generate these manually; or "manually" in the sense that we have a command.
G: So the tool which basically pulls stuff from the master repository into the website repository is currently broken because of the website transition: the website now uses Hugo as a back-end, and that broke not only the folder structure but also the toolchain that handles the automation. So that's why I am suggesting not to plan this for 1.11: we first have to fix the website repository.
A: The change is fairly significant, so let's set aside the time to stop making stuff, do bug-fix PRs, do the docs changes that we need to do, and start surfacing some of the other UX issues that I know are going to come up as part of us actually testing, because we still don't have an automated run for the e2e test suite.
B: Generating documentation for adding tests to test-infra has been challenging, because it has varied; all the times I've done it so far have been different, it just happens to be that way. So every time I've done it so far, I just synced with the sig-testing folks, like Ben and Jeff and Sen, about what needs to be done this time. But yeah, if and when things stabilize, I hope we can add more information on how to add...
A: ...automated tests. So, the e2e test suite that Liz has created: we can absolutely exercise that on our own as part of a test plan, and we can absolutely add more tests to it, to add more coverage for a lot of these features that we've created, and then eventually get the automation in place. So I'm totally okay with that path, yeah.
B: Yeah, I was talking about the other path; we should have both, indeed. But I was talking about what's checked into test-infra at the moment, and doing the three things: creating a 1.11 cluster from scratch; creating a 1.10 cluster using the kubeadm CLI at 1.11; and upgrading from 1.10 to 1.11. Those are the tests we've got, but those are high-level, and the tests that can be run using Liz's test suite are more...
B: So, what I'm still working on, as you've seen from the reviews shuffling around and removing stuff we don't actually need, is preparation for hopefully going beta with the config in (what is it?) 1.12, in September. So that is a goal, and it is mostly around reorganizing the structs, for which I have a document up; I just reorganized everything locally and uploaded it.
B: That's one thing, and I'm basically doing the least disruptive things in this release. Sorry, team, I know they are big, but it's at least the least disruptive set of things. Then we have the kubeadm and kubelet integration: basically, kubeadm will write down two files at runtime. One is the kubelet's YAML file, with structured and versioned component configuration for the kubelet; the other is an environment file with runtime flags for the kubelet.
B: This environment file is then used by the systemd drop-in, or by any other init system you prefer; it's generic in that sense. So that means we can do stuff like adding specific taints on initialization, or specific labels, or letting the user pass the node IP via a kubeadm config. These have been longstanding requests that we just marked "won't fix" at the time, since we didn't have a clear interface or stable guarantees from the kubelet team.
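The two-file scheme described here maps onto systemd roughly as sketched below. The paths follow the convention kubeadm adopted around v1.11 (a versioned KubeletConfiguration at /var/lib/kubelet/config.yaml plus a kubeadm-flags.env file of per-node flags), but treat the exact file names and variable names as illustrative rather than authoritative.

```ini
# Sketch of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
# Structured, versioned KubeletConfiguration written by kubeadm:
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# Per-node runtime flags written by kubeadm at init/join time
# (the leading "-" tells systemd the file is optional):
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS
```

The env file itself would then carry the per-node bits the speaker mentions, e.g. `KUBELET_KUBEADM_ARGS=--node-ip=10.0.0.4` (value purely illustrative), which keeps node-specific flags out of the cluster-wide versioned config.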
B: Now we have beta for the component configuration and all that stuff. Those are nice-to-haves; the main benefit is that at runtime we can detect "oh, this specific system has its resolv.conf managed by systemd-resolved", and we can then, for this specific system, write the kubelet flag with the right path to the resolv.conf. Otherwise everything is going to break. And this is per-node configuration, not the structured and versioned configuration, which is general for the whole cluster.
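The per-node resolv.conf detection can be sketched as follows. This is Python pseudocode of the idea only (kubeadm itself is written in Go, and these function names are made up); the two paths are the real systemd-resolved convention, where /etc/resolv.conf points at the 127.0.0.53 stub resolver and the upstream servers live under /run/systemd/resolve/.

```python
from pathlib import Path

# systemd-resolved keeps the real upstream resolvers here, while
# /etc/resolv.conf only lists the local 127.0.0.53 stub:
SYSTEMD_RESOLVED_CONF = "/run/systemd/resolve/resolv.conf"
DEFAULT_RESOLV_CONF = "/etc/resolv.conf"


def kubelet_resolv_conf_flag(uses_systemd_resolved: bool) -> str:
    """Build the per-node --resolv-conf flag for the kubelet.

    On systemd-resolved hosts the kubelet must point at the real
    resolv.conf, or pods inherit the stub resolver and DNS breaks.
    """
    path = SYSTEMD_RESOLVED_CONF if uses_systemd_resolved else DEFAULT_RESOLV_CONF
    return f"--resolv-conf={path}"


def detect_systemd_resolved(etc_resolv: str = DEFAULT_RESOLV_CONF) -> bool:
    """Naive detection sketch: systemd-resolved typically makes
    /etc/resolv.conf a symlink into /run/systemd/resolve/."""
    p = Path(etc_resolv)
    return p.is_symlink() and "systemd" in str(p.resolve())
```

This is exactly the kind of decision that belongs in the per-node environment file rather than the cluster-wide versioned config.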
B: It's exactly as scary as earlier, exactly as scary as other upgrades as well, but after this one it's not going to be as scary as before. So this was the long conversation, yeah; it was like an hour's conversation last time. I have it all in the issue; please, please read through it. It got really long.
B: The good thing is that dynamic kubelet config is actually going towards beta this cycle; I saw one of the main pieces merge yesterday, which is great. Again, we're not gonna use it in 1.11, but we have the infrastructure needed to enable this in the future in case we want to do it. We might well decide not to do it, but then we can have it as an opt-in.
B: The kubelet's API: as part of that, when we're doing these changes we really, really, really want to make sure it doesn't break the use cases we have, like having the API endpoint secure, having the read-only port secured, having the CRI support locked, things like that. So d2 is a contributor and reviewer in the SIG; he is in China, so he can't attend these meetings.
B: In the same way as we do that; and if we don't, we have a special case for when pre-flight checks can't run or when the condition is false. So, for example, let's say the kubelet isn't running. Instead of failing there and saying "oh, you have to start the kubelet": in virtually 99% of the cases your kubelet is gonna be stopped at the time you run kubeadm init.
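The idea of treating "kubelet not running" as a warning rather than a fatal error at init time can be sketched like this. The names here are hypothetical and this is not kubeadm's actual preflight code (which is Go and structured differently); it only illustrates splitting check results into non-fatal warnings and fatal errors.

```python
from dataclasses import dataclass


@dataclass
class PreflightResult:
    """Outcome of one preflight check (hypothetical structure)."""
    name: str
    ok: bool
    fatal: bool  # only fatal failures should abort `init`
    message: str = ""


def evaluate(results):
    """Partition failed preflight results into warnings and errors."""
    warnings = [r for r in results if not r.ok and not r.fatal]
    errors = [r for r in results if not r.ok and r.fatal]
    return warnings, errors


results = [
    # The kubelet being stopped is expected at init time, so warn:
    PreflightResult("kubelet-running", ok=False, fatal=False,
                    message="kubelet is not running yet"),
    # A busy API server port genuinely blocks init, so it stays fatal:
    PreflightResult("port-6443-free", ok=True, fatal=True),
]
warnings, errors = evaluate(results)
```

With a split like this, init can print the warning, proceed, and leave the user a clear breadcrumb for troubleshooting instead of a hard stop.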
B: So this is gonna be paired with the recommendation that users should pull the images by themselves beforehand with the new pull command, as Tim pointed out. Then, in troubleshooting, we can say: did you pull? If you did pull, what did it say? And then, if it's still deadlocked or hung, we know it's the kubelet that is faulty; we can actually detect the kubelet being faulty and have a semi-good...
B: So those are the high-level things going in. Also, is there anyone that wants to do pair programming with me on the test-infra changes? I know roughly where the bits should be and I've done it before, but I'm not gonna be around, for example, next time, so I could do some knowledge transfer to someone willing to hack on it.
A: What I'm gonna do is run through the 1.11 milestone again before next time and get all the issues logged; anything that we don't care about for 1.11 I'm just gonna start punting out. So anything that isn't on the list that we've talked about today, I'm just gonna punt out of 1.11, just so we have a concrete list of action items that everyone in this SIG can take a look at, with global visibility.
A: We do need to eventually write a charter, but what's happening with the steering committee is that the charters are changing, because people have brought up a number of questions, comments, complaints, and concerns about the format of how the charter should be done. So even if we were to write something up, it would probably just sit there in a pull request indefinitely.
A
Are
deprecated
and
the
replacement
isn't
ready
yet
try
to
show
deprecated
with
the
new
charter
that
hasn't
been
decided
on,
so
the
it
there's
just
needs
to
be
a
tighter
feedback
loop
with
some
of
the
existing
charters
that
are
in
flight.
So
it
doesn't.
We
could
we
could
draft
something
up
and
I
can
point
folks
at
all
the
existing
templates,
the
best
ones
that
are
there,
but
that's
still
TBD
and
the
finalization
for
that
stuff.
B: Yeah, we could also revisit that and see what is actually happening: for example, the add-ons API; how can this SIG help kops; how can we improve the Cluster API; how can we push ComponentConfig forward for different components? This is a true pain point and, for example, one of the requirements for us to go GA is virtually that the kube-proxy ComponentConfig graduates to beta, as we embed it in our config. So yeah, we have to sync with others there, and getting other components'...
B: ...configs to move out, and similar, would be really, really helpful as well. Sometime we could also talk about Kubespray: is it something for this SIG? Right now, as we know, the incubation process is deprecated; the projects can still live there, but do we want to find some way to put that under our umbrella or not? And similar things; and who owns that?