From YouTube: 20200421 sig cluster lifecycle
A: Hopefully folks can see my screen. Yes? I see a thumbs up from Lubomir. Alright, so we can just go through the standard group topics. If you have any group topics, please add them; feel free to add them up top and ping me in the meeting and I'll try to get to them, even if you think of something later on. Do you want to kick us off?
B: Hello, this is just a PSA that in the SIG Release meeting today there was a discussion about a certain email that they sent to SIG Architecture and other mailing lists about a potential change in the release schedule. I outlined some of the big changes here. First of all, there's not going to be... sorry, let me start with the change: code freeze is now going to lock the release branch, so you have to backport changes once code freeze is on, pretty much. This is the major change.
B: So that's like a delay of three weeks, and I think the main driver of that is the COVID situation. I'm not going to argue, but there was a bit of a debate about this particular topic, because it's disruptive to the follow-up releases after that. This is just a PSA; I don't want us to discuss it here, but if you have any comments, please go ahead.
A: That seems a little premature, given the nature of things. Typically people fix a bunch of stuff, so they're going to have to submit it twice for everything. I don't know; I think there's a lot of extra work for them, to be honest. I think they just created more work for themselves, but whatever, yeah.
A: I mean, I've wanted to have three releases a year as a simple way of changing it. Maybe this is a forcing function to have three releases a year then, because there's no sense in having quarterly releases; it was an arbitrary choice to begin with, and it also makes the time window for expiration across two versions very tight. At least this pushes that out a little longer.
A: Any other comments? We could bikeshed on this for a while if we wanted to, but I don't care enough to make a big deal of it. Going once, twice, three times: nothing. Alright, let's go on to some project readouts. If anyone has other topics that they'd like to discuss as a group, feel free to add them. One group topic that I was thinking about: I have no idea what's going to happen with the CFP for KubeCon. I know that it's coming, or it's out, I don't recall, but with regards to talks, and venue, and everything else, one of the things we typically do is coordinate that stuff, and I'm in this weird holding pattern. So I have no idea what conferences are going to look like in the fall/winter timeframe. So that's mixed up.
D: That's good timing. I'm struggling a little bit with Zoom, so I don't know if you can hear me or not; just interrupt me and tell me if you can't, and I will try to join from my phone. The thing that is of most cross-product interest, I think, is this. I was going to put in links, but unfortunately that's down as well, so I will add links later. There is this not fully diagnosed issue with flannel, and Canal, which uses flannel, which seems to exhibit primarily on RHEL operating systems or RHEL-based distros.
D: It appears to be at least two things: there's a checksum offload bug in the kernel, and some sort of different sysctl defaults in RHEL. I think what's interesting for this group, though, is that this sort of implies we have to, or should be, testing all the CNIs on all the distros, or something, and I have at least started by creating tests for flannel on all the distros, which is pretty revealing, because most of them fail right now, even the ones that kops supports with flannel. And that is just using automatic generation of prow jobs.
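(For illustration only: a minimal sketch of how such a CNI-by-distro job grid could be generated automatically, in the spirit of the prow job generation mentioned above. The CNI list, distro list, container image, and flags are assumptions for the example, not the actual kops or test-infra generator.)

```python
# Hypothetical sketch: emit one periodic e2e job per CNI x distro combination.
import itertools
import yaml  # PyYAML; prow job configs are YAML in practice

CNIS = ["flannel", "calico", "weave", "cilium"]               # assumed CNI list
DISTROS = ["ubuntu-2004", "rhel-8", "centos-7", "debian-10"]  # assumed distro list

def make_job(cni: str, distro: str) -> dict:
    """Build one periodic e2e job entry for a CNI/distro pair."""
    return {
        "name": f"e2e-cni-{cni}-{distro}",
        "interval": "24h",
        "decorate": True,
        "spec": {
            "containers": [{
                # Placeholder image and flags; the real jobs would point at the
                # actual e2e runner for the installer being tested.
                "image": "gcr.io/k8s-testimages/kubekins-e2e:latest",
                "args": [
                    f"--cni={cni}",
                    f"--distro={distro}",
                    "--test=ginkgo.focus=\\[Conformance\\]",
                ],
            }],
        },
    }

if __name__ == "__main__":
    jobs = [make_job(c, d) for c, d in itertools.product(CNIS, DISTROS)]
    print(yaml.safe_dump({"periodics": jobs}, sort_keys=False))
```

In practice the generated YAML would be checked into the prow job config rather than printed, but the grid-generation idea is the same.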
D: Ah, great question. I will say that this particular bug has caused a lot of issues to be reported against kops, against Kubernetes, against... like, kubeadm has a bunch, and SIG Network has heard plenty about it. It does ripple through our ecosystem, so it certainly has a cost to us, and obviously flannel is more widely used than, you know, a new one would be, or than my personal one, for example.
A: I guess for something like that, where there is no clear maintainer for flannel, should we start to, like... Right now in the main documentation we have a list of CNIs, right, like inside of the bootstrapping stuff. I think one of the things we possibly could do is curate the list of CNIs that we validate or verify against, and maybe link out to the other ones. And, you know, given that flannel is effectively unmaintained, should we even be highlighting it in the docs?
B: I can confirm that I've seen a lot of flannel issues, and at this point I just, like, recommend Calico or Weave Net instead of flannel and close tickets all over the place. Flannel is not really well maintained nowadays; "just try another CNI" is what I tell them. This is our perspective as a Kubernetes installer, for kubeadm at least: we just cannot support everything out there. And that turns into the docs question.
B: This goes to the discussion that SIG Docs opened about third-party content. They really want to remove most of the CNIs listed on that page, and I think they are still going to allow us to keep our CNIs in the kubeadm guide; that's a separate page where we have a preference for some of our top ones, because...
E: This is an interesting conversation, because I can understand the SIG's perspective, and I've read that thread before and commented on it, but it is really difficult as a user who's trying to determine what to pick. There are hardly any collections of comparisons, or even lists of these things next to each other. There are articles that will compare, you know, maybe two or four CNI providers, but there's no, like, "CNI hub", shoot.
B: I'm quite tempted to remove flannel at this point from the kubeadm guide. If you give me the approval, I'm going to do it right away, today pretty much. Okay, in terms of what we test, we can add a sentence to the kubeadm guide that we are explicitly using Calico for testing. It's not really that we're using it because of this or that; we're using it because it worked for us at that particular time, and we are just sticking to it. So we cannot assert this versus that; we're just explaining "here is our..."
D: I mean, I think that's reasonable. One of my challenges has always been that we don't have any data, and so I figured I could start, we could start, running tests and getting data. Like, do we actually know that Calico is better, or do we just think it's better? It used to be that flannel was better and now Calico is better. I was pretty shocked at the failure rate of flannel, to be honest, and I will add Calico as an interesting one.
B: Something about the whole CNI picture: something that kind did, and I like what they did, but it's also a technical complication, is that they implemented their own CNI and that's what they support. They say "here is our CNI, it should work for you; if you try something else, it's not supported" for the Kubernetes deployment that kind is doing, but it's still possible.
A: We can make some call-out and we can review it as a group. I'm not going to... if somebody wants to help boost the CI signal, that's totally on them, but I see no obligation as a SIG to do that, because it should really be the responsibility of the CNI. We have federated tests and we have all these other capabilities for them to be able to report out, and, you know, they've chosen not to do so. I think we can state that this is our default testing CNI.
D: That seems fair to me. I think, you know, for historical reasons kops has not made a recommendation, so we'll probably just, like, set up reasonable grid testing. I'm certainly not going to take it as my obligation to track down and fix every CNI provider out there.
G: So, just as a little bit of historical data: I'm Bryan Boreham, lead maintainer on Weave Net. I was under the impression that Weave Net is included in some of the Kubernetes end-to-end tests, although I haven't looked in a while, and I don't recall being notified of other mechanisms to join in, although that could just be because I'm forgetful.
A: We just pick... as a SIG, we typically just pick one to make sure that everything is working. We care more about the componentry of whatever we're testing; particularly in this case, or the previous case, that would be kubeadm. We get test signal, or kops should have its own test signal, but we don't particularly care about the CNI itself.
E: It was about April or June last year when we started implementing kind in test-infra, and kind with Weave Net was not bringing up ready clusters. I did ping about this issue with Weave Net, but we never resolved it, and I recall Lubomir and Fabrizio were debugging it for a while and couldn't find the problem, and then we just moved the tests in favor of Calico.
E: ...on what we should do. So, the update on this that I'm aware of is that, if I'm hopefully remembering it correctly, it's April 23rd when we're supposed to have, like, slot reservations or something. So if anyone knows anything about the number of slots that you're supposed to reserve for the program: that, I think, determines how many people you're supposed to pick from the pool, and probably, like, budgetary stuff from Google's side of things. We have no idea what we're doing, but we keep talking about it.
E: So that's good. Things are going great in cluster add-ons. The most recent development we had was a discussion from last week that went into great depth about the bootstrap needs for very early add-ons, such as CNI. Potentially, if you wanted to write a CNI operator, one of the unique things that you'd need to deal with is the API server advertise address, because of the absence of a pod or service network and potentially the absence of kube-proxy; a similar thing applies for a kube-proxy operator. So yeah, that was cluster add-ons.
A: There was an email regarding, like, the notion of federating APIs and controllers out of the main k/k repository, the start of which was storage, and at the last SIG Architecture meeting the last thing we discussed was: let's just make it a full-blown API. Why would we ship something that doesn't have all the componentry required as part of Kubernetes? But we are there asking this question again, and we still don't have an answer for them, because we don't actually fully have support for cluster add-ons all the way through the toolchain.
A: So maybe I can spend some time to discuss this. I also did an eval and walkthrough of cluster add-ons, and I have a bunch of questions, though I don't want to raise them here; I want to distill my thoughts a little more. So maybe next time we could have an actual walkthrough and discussion of cluster add-ons and some of the problem space. That's probably more useful, and I could send out my question list to you, Lee, and Justin to see where we're at. Yeah.
E: So we want to be prioritizing a proposal to move forward with these separate API types that kind of delineate the fields that seem to be empirically more special, and then we also want to be facilitating the per-node UX for things like kubelet config inside of kubeadm, to get away from some of these purely central config patterns and empower users to make their own declarations; also things like allowing users to provide the config at all opportunities, such as during an upgrade.
E: These are the conclusions that we made, basically just course-correcting on some of the previous decisions that didn't work out well. We didn't get much into the versioning UX stuff, which is still a maintenance burden and overhead, but kubeadm is currently working around those types of issues, as well as tools like kind. And then there is a very cool and interesting component config proposal and patch for kubebuilder put up by Christine, who joined last week's call. We discussed how to simplify and approach that.
H: The big thing is, so, a little background: our CI jobs need physical machines, since we test on VMs. They normally sit under somebody's desk at Google, and we've lost SSH access to our Windows machines, so we have no way of turning Windows CI back on. The best way I know of to fix that would be to put Hyper-V in a VM, so we could at least test Hyper-V, but I don't know how to go about doing that. So I was wondering if anybody here did. That was it.
A: The main focus for at least the foreseeable future is not necessarily v1alpha4, but to refactor, clean up, and address issues with v1alpha3; that's a long laundry list. If you want to find out more, go to the Cluster API meeting. From a pragmatic perspective, I think v1alpha3 is the last release, and the blog post that went out is probably the last PR that we can do, at least for this.
A
So
I
think
that
if
there
are
other
tools
that
folks
are
aware
of-
or
maybe
even
some
CI
signal
that
we
can
potentially
change
I-
think
now
is
the
time
to
for
us
to
start
to
think
about
refactoring
some
of
our
default
CI
signal-
and
it's
always
been
my
long-standing
goal-
to
light
slash
cluster
directory
on
fire,
not
to
say
that
I
mean
it,
but
I
hate
it
with
the
fighter
passion
of
a
thousand
suns.
So
if
we
can
do
that
in
some
time
in
the
future,
that'd
be
awesome.
I.
A: I think there's a lot of Google-isms inside of the /cluster directory. I don't know if anyone will ever pay down that debt unless it's, like, a community-based Google Summer of Code person, maybe, that actually reports into Google; otherwise I think it's going to be a long tail to eventually turn that over. Or another option would be, like...
D
Don't
believe
anyone
maintains
it
I
think
I
am
with
you
on
the
we
should
deprecated
I.
Think
that
problem
is.
We
need
test
coverage.
Well,
we
historically.
The
issue
has
been
that
we
need
test
coverage.
We
could
argue
that
we
don't
need
test
coverage
but
like
that
has
been
the
big
challenge
when
you
just
jerk
like
if
we
just
just
turn
it
off.
For
example,
we.
D: Something that's occurred to me is: it remains, much to my chagrin (or however you pronounce that word), the path of least resistance when you are adding a feature to add it to the cluster directory. I would much rather it was added to kubeadm or kops or Cluster API, and that we stopped making the problem worse, as it were. So I don't know how we could say that, but right now we continue to add tests of new features to the cluster directory, and that is harmful, in my opinion.
A
Along
with
the
documentation,
I
want
to
write
are
the
questions
I
have
for
add-ons,
maybe
I
should
we
do
should
craft
a
statement
for
next
time
about
policy
in
deprecation
because
like
if
we're
not
going
to
support
it,
it
remains
like
an
albatross
that
we
kind
of
live
along
with
I.
Think
it's
fair
for
us
to
say,
because
we
are
considered
the
owners
of
that,
even
though
most
of
the
people
that
have
quote-unquote
owned
it
don't
work
on
the
sigit
anymore,
that
we
can.
We
can
set
a
policy
like
a
sunset
window
for
this
stuff.
A: We should. I think maybe the policy should be that we craft a statement of how we want to do this, in which we signal the intent: we don't own it. Maybe a long time ago Robbie and some folks at Google owned it and actually, you know, maintained and patched and updated it, but those people have since moved on. So at this point it is currently headless and is a rolling ball of duct tape owned, really, by no one. So either we need to find new ownership or we need to deprecate it.
B: There are problems with both of those, because, first, my original proposal there was to start breaking things away from the directory: currently we have the COS images in there, and also the etcd image. Obviously the addon manager is still being used, to my understanding. So I think we should consider refactoring first before starting to find the owners.
A
Maybe
maybe
next
group
topic-
let's
add,
let's
add
a
group
topic
for
next
time,
where
we
actually
just
kind
of
enumerate
some
of
this
stuff
and
talk
about
it
in
a
little
more
detail.
Maybe
we
can
assign
some
owners
and
then
we
can
also
craft.
The
like
I
can
maybe
have
a
crafting
of
the
initial
language,
because
it's
just
it
just
lips
along
forever.
It's
actually
a
problem.
D: I think you're right. I think technically it might actually be deprecated, so I don't know that we can, like, double-deprecate it. I think that's where setting a date, as much as I don't like the idea, like threatening a date and telling people "if this impacts you, let us know", is a good way to discover the uses that we don't even know about. Right, I mean, sure, we should move, like, the etcd image out.
A: I do think that we should, because no one should be using this thing. It seems weird, right? Like, as a project there's this weird disconnect where we have SIGs who are empowered to do things, and we have this thing that exists that is only really used to do releases, and the release managing team is not actually testing these feature enablements in the actual tooling that we recommend. That's just awkward, right, so it's super awkward. We're release-informing for a bunch of other stuff, so they do see it.
A: We do get a look, but we don't broadly enable some of the features that are there. Like, we don't do kubeadm for scale testing, right? We really should; the only scale tests are this weird hybridized thing that's inside of the /cluster directory, for example, and I think it does the community a great disservice.
A
In
my
opinion,
sure
we
could
let
it
live
on,
but
I'd
rather
see
it
move
to
something
that
we
maintain,
because
I
think
the
community
would
benefit
right
if
the
community
actually
had
bunch
of
these
feature
tests
going
on.
For
like
say,
for
example,
for
Koob
ADM
or
cluster
API,
they
they
would
be
able
to
know
that
they
could
get
those
those
artifacts
with
much
more
sensitive
rigor
as
part
of
the
release
process.
B
Yeah
I
completely
agree.
The
problem
I
get
is
that
they're,
multiple
stakeholders
at
this
point
like
we
enumerated
them
earlier,
we
have
security,
have
artifacts,
see
they're
sick
testing
of
artifacts
API
machinery
have
artifacts
in
there,
so
there
has
to
be
a
collective
decision
by
the
project.
Obviously
everybody
was
free
movement,
but
nobody
has
the
time
to
to
provide
the
replacement
and
we
are
helping
with
that.
But
I
I
think
that
we
should
not
only
on
how
effort
for
replacing
poster.