From YouTube: 20180919 sig cluster lifecycle
A: Hello, today is Wednesday, September 19th, 2018. This is the standard SIG Cluster Lifecycle sync, meaning there are a couple of agenda items. Just as a PSA, we are in the throes of the final release, so if you have any topics that aren't germane to the release at this point, we'll probably hold those until another date. I do plan...
B: I've been talking to some people who would love to get started contributing to Kubernetes, and they found it odd that they can't be assigned issues in the repo. I was wondering if that was a Kubernetes thing or a GitHub thing, but it looks like it's a GitHub limitation: we can't assign people unless they are in the organization. So it's kind of an issue for new contributors, and the process that we've been using so far has been informal.
A: I think, you know, we can mention that in the regular SIG Cluster Lifecycle meetings too, so it's broadcast to the other folks, but that has been the implied process. We haven't really written down all the details of what we do. We've had a couple of meetings where we have informally discussed it. I don't know where it would live, because different SIGs operate differently. We could possibly put it into the community repository; that's probably a good place for it. We could start...
A: The broader group kind of splintered early on in the formation of SIGs and incubation, and I'm trying, within the 1.13 cycle, to rally folks to try and have a single umbrella with many subprojects. Yeah, especially as we push kubeadm to GA. I think as it becomes GA and is used by a number of tools (minikube, kubespray, etc.), having that general framework by which we operate is super helpful, because it allows people to move across these different projects in a cohesive way.
A: I have found some issues. There's a bunch of other things that are coming in a little late in the cycle, and I think we need to apply a little bit of extra rigor to verify some of this stuff, because last night was pretty frantic and we still have a bunch of other things to fix up before the release.
A: We don't have a global tracking issue; we just have the milestone. So if there are things that are not working in 1.12, please add them to the milestone and we'll deal with them that way. The milestone currently has 10 open issues. There is other stuff that I know is not working, like dynamic kubelet configuration. To be honest, at this stage of the cycle I'm fine, because it's a feature-gated thing. I'm fine with punting it to 1.13 and backporting fixes as needed, because it's not a default configuration; it's a totally opt-in configuration.
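As a minimal sketch of what "totally opt-in" means here (assuming the gate name in use at the time was DynamicKubeletConfig; the file below is illustrative, not from the meeting), the behavior only exists on a node whose operator enables the gate explicitly:

```yaml
# Hedged sketch: a KubeletConfiguration that opts a single node into the
# feature-gated behavior. Without this, the feature is off by default.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  DynamicKubeletConfig: true
```

This is why punting it to 1.13 was low-risk: clusters that never set the gate are unaffected.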
A: It depends. The answer is: we typically only backport things if they are breaking changes for a release. If something is status quo and we want to change its behavior, the answer is typically no. This one is a status-quo change in behavior, so we would not backport a fix for that. But if we had something that was actually a breaking change, a difference in behavior from, you know, 1.11 to 1.12, then we would backport a fix.
A: Usually we try to be pretty strict on the rules, because the trains keep moving, right? If you don't hit this train, there's another train coming. The problem that we face with cherry-picks, which you would face with any project that has a time-based release cadence, is that you could cherry-pick forever and you would never keep on truckin', right? So the guideline for cherry-picks is always: breaking behavioral changes between minor release versions.
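Mechanically, the backport flow being described is an ordinary git cherry-pick onto a release branch. Here is a self-contained sketch in a throwaway repository; the branch name, file, and commit messages are made up for illustration:

```shell
#!/bin/sh
set -e
# Scratch repo standing in for kubernetes/kubernetes, with a release branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git branch release-1.11              # stable branch cut at the 1.11 release
printf 'fixed\n' > behavior.txt      # the fix lands on the main branch first
git add behavior.txt
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "fix breaking behavioral change"
fix=$(git rev-parse HEAD)
git checkout -q release-1.11         # then it is cherry-picked back
git -c user.email=demo@example.com -c user.name=demo \
    cherry-pick "$fix" > /dev/null
cat behavior.txt                     # prints "fixed"
```

The policy question in the meeting is about when this is allowed, not how; the mechanics are the same either way.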
A: It's not pretty. We could always split hairs on this, and I would totally be empathetic to the idea of backporting fixes, because it would be better for the dot-release product. But at the same time there's a process problem that would occur, in that we would be in paralysis mode for minor releases, to be honest.
A: All right, so going down the list: we only have ten bugs in this list. Are there any others? If there are other issues, please file them against this milestone in the kubeadm repo and we can address those. I think most of the things we have are docs. Liz, do you have any update on the v1.11-to-v1.12 upgrade docs?
D: Michael Taufen says here, and the answer from Michael is, in particular, that a node cannot set its own dynamic kubelet config. This is the problem. I don't know, because basically when we run kubeadm join on a node, we have only the node itself as identity, so I don't even understand how it could work in the past.
A: I don't think... here's a conundrum that we've had for a while: the test matrix for validating features that are cluster-wide has been non-existent. It's been ad hoc, right? It's basically been users enabling feature X on their own because they care about it. This needs to change in 1.13, and as we kill kubernetes-anywhere with fire in 1.13, part of that should be to try and enumerate a test matrix for said features, so we can start to address that all the way through.
C: So this is linked to the PR that you sent, Tim, about changing the stable version from 1.11 to 1.12. We found out that we generate in the docs some much older versions next to the Kubernetes version for all the commands we have in kubeadm, and I think this is some sort of a bug, because they generated the docs before we did the same PR that you recently did, but in 1.11. So that's that.
A: I need to talk with somebody on SIG Release, because the typical process was to do some minor update to that GCS bucket as part of the RC process; it was typically always done before RC1 is cut, and our instructions... If you actually look here, if you go to the instructions list in the history inside of the kubeadm repository, there is a docs directory and then there is a release-cycle document. If you scroll to the bottom, we've done this now for many releases: right before RC1 is cut, bump the Kubernetes version in kubeadm like this, and it basically updates to point to a different GCS bucket. When you pull, it's just a file that gets updated as part of the release machinery, right? And, you know, this is an example from 1.6 to 1.7, and we did that for 1.12. I need to talk to somebody on release to figure out why that file doesn't exist, because if you try to pull... let me see if I have a command here somewhere.
A: We could just hold that PR until the very end, but that's not the way it was done in the past, right? So that's the conundrum. This one in particular, in any event, they even have it documented: it was before RC1 is cut. The problem is continuity and consistency across releases. I think Caleb was the only person who's actually been running the release scripts, so I'll try to sync with Caleb.
C: So it was like that last release. I think this time we're gonna get it right, but again I don't have confirmation. When are the SIG Docs people going to generate our reference documentation? Last cycle it was a mess, and this cycle I have no information on when they're gonna trigger it. So we have to sync with them, because if they start generating the docs now, you know, we'll have the old version.
A: This is the setup, but it includes both the external etcd and the stacked masters, and the thing that stood out to me is that you are specifically working on the external etcd version and Jason was working on the stacked masters version, but this update touches this portion... you know, it does update a section, but it's the stacked masters section. It just still says we went with... yeah.
A: So you have to re-copy your initial invite, just because that's how the Kubernetes community calendar works. So for those on the call, expect an email to the SIG Cluster Lifecycle email lists, as well as an update to the community calendar, which, hopefully by next week, will be on the new Zoom, and I'll send another reminder on the Slack channel as well.
A: I just don't want to update it today, because we're already in mid-flight. The last thing is the cloud provider item. This is basically just a tracking issue for me to poke people in the eye with a stick, because they had promised that they were going to update their documents and they still haven't, so that we can reference them as part of the installation instructions, because for many releases we have had a number of cloud-provider integration issues filed against the kubeadm repo.
A: From my side it's frustrating, because the external cloud provider work was supposed to be done by now. The instructions vary quite a bit on how you configure external cloud providers, especially if you want to use the previous ones that were in-tree and should now be out-of-tree and supported. That includes GCP, AWS, and OpenStack. I don't have an answer to that, right? So if people want to use a given cloud provider, how should they run it?
A: We won't get it all in place and prioritized, but, you know, come with things that you think you'd be able to throw down on and we'll see how we can level-set priorities. I think the big thing we want to do for 1.13 is get the kubeadm config, or the many configs, to beta, so we can get kubeadm to GA. There's also some sub-command shuffling for the sub-commands to flow into the right locations, and there will be some restructuring work, but I think that part is pretty easy.
A: One of the canonical problems that people have had is wanting better logging that clearly delineates phase A, phase B, so that they know the execution order. Then if they want to explicitly run the steps, they can do it by themselves, because many people have said things like "I would like to run X out-of-band."
D: Now, what we support: we support init, we support join, and we support upgrade, where upgrade basically means "change the release," okay? What is not supported properly by kubeadm is changing an attribute of the cluster without changing the release. Say I want to add a new flag to the API server, or I want to enable a new runtime.
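The supported surface described above can be sketched as plain commands; the version number, address, and token here are made up for illustration, and the commands are printed rather than executed, since they need a real host:

```shell
#!/bin/sh
# Illustrative only: the three lifecycle operations kubeadm supported at the
# time, per the discussion above.
kubeadm_ops='kubeadm init --config kubeadm.yaml                # first control-plane node
kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef     # add a node
kubeadm upgrade apply v1.12.0                                  # move cluster to a new release'
printf '%s\n' "$kubeadm_ops"
# Not covered: reconfiguring a running cluster (a new API server flag, a new
# runtime) without also changing the release, which is the gap being raised.
```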
A: So, sorry, that area is pretty thorny, but I do know that's something that people like to do, or do in strange ways. All right, are there any other group topics, especially anything pertinent to 1.12? If not, we can always punt everything else to next week. Going once, twice, three times: okay, thanks everybody, thanks, bye.