From YouTube: 20200519 SIG Cluster Lifecycle
C: Yes, so I sent a PR to the SIG charter, arguably removing some things that are out of scope for the SIG at this point or maintained by somebody else. But the PR saw some comments from steering committee members, such as that we should probably define who the owner of the upgrade policy of Kubernetes is, and I said that SIG Cluster Lifecycle never agreed to be the owner of that policy.
E: You know, the reason I'm mentioning it is because I recall the n-minus-2 policy. It was purely a — it was a group decision. I think there was a Googler — the person, I think, was Alex Mohr, but my recollection is bad, it's from years ago. It was, like, an early Kubernetes meetup in a basement somewhere — or it ended up in a basement — and it was motivated by the idea of, like, how would we do a rolling upgrade, and somehow we got to n minus 2. I'm not quite sure — I think it's actually about version skew.
A: No — Jordan, who wrote the original doc a while ago — it's basically a codification of what we already did, for the most part. So I don't know, I can dig it out and we can go back, but we've never really owned it, because if we had, then that would be great.
E: Yes — that code is now moved to the image-builder subproject, kubernetes-sigs/image-builder I think, and so we could remove the image-builder subdirectory from that repo, and then it would be empty, and then I guess we could — I don't know whether we need to keep it for any particular reason. It also then becomes something we could repurpose for something else, if anyone has ideas on that. But yes, so that's sort of the situation.
A: We could probably — there's, I forgot what it's called — it's not "the Attic", but we move projects that are retired into it. It's like a recycling bin, where we can always take something out if we decide we want to use it still. There's a separate org where we can transfer non-active projects to, and then, you know, if we want to take one out of retirement we can, but otherwise it's not being worked on routinely. It makes a ton of sense to me.
C: So, what is the cluster directory? The kubernetes/kubernetes repository's cluster/ directory: it contains a collection of deprecated and only partially maintained components, like container images, the addon manager, tools for configuration, I suppose. A bad thing about the directory is that it's tightly coupled to other folders in kubernetes/kubernetes — hack/ and test/ — and also coupled to other repositories, such as test-infra, since the beginning of the repository.
C: The project wants to remove this folder eventually; however, the coupling makes this kind of difficult at this point. The cluster/ directory was originally created in 2014, shortly after the project was open-sourced. This implies that kube-up, the original tool for cluster creation, existed even before — like, before Kubernetes was open-sourced. I'm giving also the commits that performed this action over time. The cluster/ directory holds over 9000 commits; it practically became a location to dump code instead of solving things elsewhere.
C: What does it actually contain? So, cluster/ has addons — this is the deprecated addon manager that is currently used for GCP e2e runs and storage; gce — this is the GCE provider for kube-up, and I'm going to explain what "provider" means later; images — this is a collection of images for e2e conformance; kubemark — kubemark is the benchmarking tool for Kubernetes, if you don't know; kubemark is also here. There is a [inaudible] folder that is a provider for kube-up — I honestly do not know what it is exactly; log-dump — this is the log-dump utility.
C
You
also
have
a
pre-existing
folder.
This
is
another
provider
from
Cuba
for
kube,
mark
/q,
Bob.
That
I
don't
understand,
and
we
have
skeleton
Skeletor
is
a
basically
when
you
say
to
Cuba
that
there
is
a
ready
provider
and
you
don't
want
to
specify
something
like
a
back-end
such
as
GCE
a
provider.
A
has
a
couple
of
meanings:
one
of
them
is
the
infrastructure
provider
and
the
other
one
is
the
test
scenario
provider.
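For orientation, kube-up is driven by shell scripts in that same directory; a minimal sketch of the flow being described, assuming a kubernetes/kubernetes checkout and pre-configured GCP credentials, looks like this:

    # Select the provider backend (the cluster/gce folder mentioned above),
    # then bring a cluster up and tear it down with the scripts in cluster/.
    export KUBERNETES_PROVIDER=gce
    ./cluster/kube-up.sh
    ./cluster/kube-down.sh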
C: So, how does this relate to SIG Cluster Lifecycle? Well, after the SIG was created — please correct me — it became the owner of kube-up. kube-up, as a deployment tool, still resides in cluster/. At some point all commits for cluster/ were labeled with the sig/cluster-lifecycle label; well, in fact, all — or some — of the subfolders belong to other SIGs.
C: Yeah, so this is the second cleanup that I did — it wasn't completed, because there was still some residual in kubernetes/kubernetes: it still had testing for kube-up and clusters. So — I'm not going to bother you with the details too much, but this is roughly how we're currently using e2e tests with kube-up and the cluster/ directory: we start the testing scenario; there's a tool in test-infra called kubetest; it runs the e2e suite, and it also runs kube-up with the provider GCE.
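The rough shape of that kubetest flow, as a sketch (assuming kubetest from kubernetes/test-infra is installed and cloud credentials are configured):

    # --up/--down invoke cluster/kube-up.sh and cluster/kube-down.sh under
    # the hood; --test runs the e2e suite against the cluster it brought up.
    kubetest --provider=gce --up \
      --test --test_args="--ginkgo.focus=\[Conformance\]" \
      --down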
C: So, that's a very detailed analysis of how we use the cluster/ directory for testing. The tight coupling is something that should be deliberately resolved by SIG Testing, because SIG Testing are the owners of kubetest. The GCP provider — recently Cluster API started using it for e2e tests; I don't have all the context for that, but my understanding is it's because they don't want to reference the kubernetes/kubernetes repository directly.
C: So, who is supposed to attend to this? A serious question. As I mentioned, cluster/gce is owned by the GCP provider; it's up to them to deprecate the usage of this kube-up provider. SIG Scalability is another SIG involved: they also have tight coupling with the kubemark deployer, and this is, to my knowledge, the only, like, scale testing we're doing over GCE.
C
So
these
are
a
couple
of
tracking
issues.
The
first
one
is
the
sub
to
the
tip
created.
This
is
removing
the
questa
directory
in
the
second
one
I
created
is
to
potentially
remove
the
release
blocking
Cuba
based
job
in
a
place
named
with
coaster
API,
so
potential
actions,
this
sub
thing
our
proposal,
II
disability.
My
proposal
is
that,
because
we
haven't
submitted
any
convinced
for
Cuba
or
reviewing
this
project
anymore,
my
proposal
is
that
we
should
just
remove
it
from
the
list
of
some
projects.
C: Other aspects of discussion: Cluster API can help with supporting provisioning GCE Windows VMs — this is going to replace the, you know, the kube-up usage for Windows at this point. The second item that it could help with is supporting provisioning of large-scale clusters — I don't know if that is a goal for the project, but currently I don't think that kind is something that can be used to replace this. If we decide to not use Cluster API for that, we pretty much have to write a new deployer.
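As a rough illustration of what "provisioning via Cluster API" means here — a sketch using the clusterctl v0.3-era syntax, with the cluster name, version, and machine counts purely illustrative:

    # Generate a workload cluster manifest from the installed infrastructure
    # provider's template, then apply it to the management cluster.
    clusterctl config cluster my-cluster \
      --kubernetes-version v1.18.2 \
      --control-plane-machine-count 1 \
      --worker-machine-count 3 | kubectl apply -f -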
C
The
reality
is
that
gradually
you
should
promote
the
company
cap
G
at
least
walking
sorry
jobs
to
release
walking
because
they
are
currently
only
informing
so
I'm
going
to
leave
this
wide
I.
Don't
have
any
boss
fights
I
think.
So
what
do
you
think
about?
The
first
item?
Remove
Cuba
from
all
the
projects.
C: The problem here is — well, a couple of slides ago I mentioned that I tried to move cluster — sorry, kube-up — into a subfolder, but that broke a number of things. So if we moved kube-up to a subfolder, we could say, "hey, we have an OWNERS file here; we are still the owner of kube-up specifically" — but it was not possible. So now kube-up still resides in the root of the cluster/ directory. So it's a folder ownership problem, I guess, also.
A: Test-infra has never been on the hook for making major changes that are — not outside of — that are in k/k proper, right? So, like, how can we get test-infra to own the pieces — that integration piece — to decouple it, so that we're not doing all the work of all the dependencies that exist inside of there? Plus there's the image stuff — that's a side that is also weird.
E: I feel like maybe there is an outcome which is, like: we can say we will stay owners, but we will not accept — we will, like, hard-deprecate it; we will not accept any changes, and any new functionality that wants to go into it — or any changes that want to go into it — should instead go to Cluster API. And if your test relies on new functionality, it's now going to have to move out, plus, right — yeah, and migrate too.
E: Sorry — it would be great, for the first test that's running, if we're using Cluster API, to say: look, it's done here; follow this example PR. But, like, I feel like making it un-owned is not going — it's going to make it worse for us; in terms of getting rid of cluster/, it's just going to have more and more stuff going into cluster/.
F: Sorry, my mic was on mute. I was just curious: do we know what the people on the testing side of the house, you know, think about this? Because I like the approaches — like, if we're going to continue to own this, then taking a strong stance towards, you know, "we'd like to deprecate it, move to something else" seems appropriate — but it sounds like we need all the testing folks to understand what's going to happen. Is there any indication about that?
A: So, they've known about this problem for years; they've made no changes in this area. So, like, the problem with the k/k project is that it's been — it's slowly, literally whittling down to a skeleton crew in the core that actually knows how all this stuff is, I don't know, Rube Goldberg-ed together, and the only way to fix that is for people to step up and actually disentangle the hairball that is there. I liken it to trying to disentangle, like, a snarl on a cat — while the cat's awake.
F: Yeah, no, it makes sense. I spent several days last week trying to understand the e2e testing that was all encoded there, and yeah, I can get that this is difficult to do. I feel like these kinds of hard-line stances — of, like, if we're going to own it and we want to move towards something else, then we have to say, "yeah, we're not going to accept more changes here; we'd like them to go to this other place" — but it sounds like, like you said, to untangle this, SIG Testing has to be included in on this.
A: Maybe we — let's, let's go to SIG Arch to describe this conversation, and we'll try to frame it. I think definitely bring your slides, Lubomir — it's a nice sort of outline of the, of the mess — and then we can have the — it needs to be, like, a sympathetic conversation, like: this is unmaintained, it's just a mess, it's a burden, so—
E: I can't find the button, so I'm raising my hand physically. I just want to say, like — kubemark is essentially a version of kubelet, to my understanding — a version of kubelet which has, like, a null CRI provider, so it doesn't actually launch your containers. So it's used for, like, benchmarking: it brings up everything and it runs through — it might actually run through the e2e tests — but it just doesn't launch any containers.
E: There are — there are two tests: there's a scale test, which is like five thousand nodes, and there's kubemark, which, my understanding is, is actually just, like, not that demanding of the machine, because it doesn't — it just does the scheduling and the controllers, but it doesn't actually run any pods or actually run the containers. I'm sorry, it's—
A: I mean, like, you know, it's like a nice shot across the bow, too — like: we de-resource it, and we recommend e2e via your, uh, provider. Like, you know, I have empathy, but also, at the same time, I have frustration, because I know, like, you know, you can only do so much — you're being tapped. People have bled resources off of main k/k for a long time, so I think if they don't kind of pony up to do some of this hard work, it becomes really difficult for the project to be sustainable.
C: Okay, so project updates — kubeadm. We spent a lot of time last week debugging e2e flakes, so it was quite problematic, but we sent a couple of PRs to try to fix what we saw as failures. Also, I added a link for a KEP that we are planning to implement as a feature for 1.19: this is to use raw patches for customizing the control plane static pods.
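A hypothetical illustration of what that raw-patch feature looks like in use — the patch-file naming scheme and the --experimental-patches flag spelling are as introduced around v1.19 and should be treated as assumptions here:

    # Write a strategic-merge patch targeting the kube-apiserver static pod,
    # then point kubeadm at the patch directory during init.
    mkdir -p /tmp/kubeadm-patches
    cat > /tmp/kubeadm-patches/kube-apiserver+strategic.yaml <<'EOF'
    spec:
      containers:
      - name: kube-apiserver
        resources:
          requests:
            cpu: "500m"   # example tweak to the generated static pod
    EOF
    kubeadm init --experimental-patches /tmp/kubeadm-patches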
E: There we go — phew. Can you hear me? Yes. I think — more or less, historically we've always been, like, well behind on cuts, and we've actually sort of caught up now, but now we are sort of facing a new release problem, which I think is sort of interesting, again from the subproject perspective or so.
E: So, we continue to have a ton of changes going in now to our master branch, and we haven't cut a, like, branch for 1.18, which is the one that we'd like — we're probably now a little overdue to release, but not too overdue — and we want to cut that branch to stabilize it. But if we cut the branch today, then we don't get as much test coverage: most of our test coverage focuses on the master branch. So I think that's going to be our next release.
E: I mean, it was fine with scripting, but it was more — more overhead than we'd like, and there is a trade-off around when you cut the branch: like, today we have the 1.18 alphas, for example, coming off master, so any PR that goes in, like, will go into the next release; but that makes it, like, much harder to get that 1.18 into beta and the like. So, what is the right time to cut that release branch and start taking the pain of cherry-picking, but get to earlier stabilization?
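The trade-off being described, as a minimal git sketch (branch name and SHA are illustrative):

    # Cut the release branch off master, then stabilize it by cherry-picking
    # individual fixes back; every PR on master now targets the next release.
    git checkout -b release-1.18 master
    git push upstream release-1.18
    git checkout release-1.18
    git cherry-pick -x <fix-commit-sha>   # -x records the original commit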
C: Yeah, you know, the whole repository dependency management is very difficult, and that's why I wanted kubeadm to follow the release cycle exactly — because it saves us a lot of trouble. There were discussions from before where we had this — it was not clear how kubeadm was going to do it. Pretty much everything that tried to decouple from the cadence of the core project is very painful, and I've seen projects out there that follow a very strict, precise cadence for all their modules, and they have massive repositories and projects out there.
B: The time frame is probably going to be June-July to just get them implemented. Yeah, there's a lot going on, so: we have conditions; remediation — external remediation, this is both for KCP and for external, so providers can provide specific remediation; we have multi-tenancy — in this case the work is being tackled with CAPA first; CAPG support for [inaudible] and etcd; and getting credentials from secrets, so that you don't have, like, to push up data with secrets.
A: One thing that's worth calling out, especially to this group, is, like — as we talked about the slash-cluster directories — all the different tests that folks have been working on too, and the set of e2e tests, especially the upgrade tests: those are really important, because they — they help to highlight, like, we are doing all the testing, and potentially even more than what you currently get for release-blocking signal.
B: Yeah, this — this whole, like, effort was done in a way that, like, you could swap in providers. It's, like, the same exact [suite] — I see the runway for CAPV, for us, for CAPA, to work as signal — but also, like, you can use the same tests with a different configuration to remain stable, and this effort to move up, yes, to these — to this new test framework is underway; so we'll definitely get there, and we'll probably have, like, a lot more to show, probably by the end of the month, I think.
A: The big thing that we need to start to track, as we start to reduce the state space of cluster/, is that there are a set of weird tests that exist in slash-cluster for hardware enablement and other stuff that don't really exist anywhere else, and once we get to, like, that space, that's going to be, like, a long tail — like, how do we do certain hardware enablements and stuff, yeah.
C: By the way, me and Fabrizio saw some — I would say, like, degraded performance in Prow in terms of CPU and networking resources, and I'm not sure what was causing that, but it solved itself, so it was temporary. We were seeing, basically, simple tasks — like pulling something from the internet and then building something — require, like, five times the time, basically. So it was confusing.
C: Hopefully it's going to improve, yeah. As I said, [inaudible] submitted some fixes to resolve the high CPU, I guess — we should see. I'm going to watch the kinder jobs as well, to see how — how they are doing, yeah.
C: 1.10 mostly has improvements for podman with cri-o, so that's slowly creeping its way out of experimental state — so you can use minikube on, like, Fedora and stuff like that, because Docker is impossible to install on Fedora. And then we released 1.10.1 the next day, because it turns out Windows was totally busted. So 1.11 is going to focus on getting our Windows integration test infrastructure back up and running, which — with the help of SIG Windows we have some VMs that are running our tests on them again.
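For reference, the cri-o path being described — flag names from roughly that minikube era, offered as an illustration rather than exact syntax:

    # Run minikube without Docker, e.g. on Fedora: use the podman driver
    # and cri-o as the in-cluster container runtime.
    minikube start --driver=podman --container-runtime=cri-o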
C: Which is a good question, yes. It's — it's becoming more and more possible, because they now can build the kubelet without dockershim, and potentially — the decided agreement is to move dockershim into a new repo, at least I'm hoping that this is what they're going to do — and, either way, SIG Node wants to stop supporting it. So this is a question now, because SIG Cluster Lifecycle has users out there still using Docker; for SIG Windows it's still a dependency for them; containerd for Windows is not in a good state.
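For context, switching a kubeadm-managed node from Docker to containerd is mostly a matter of pointing the kubelet at containerd's CRI socket; a sketch with the default socket path, flag spellings as of roughly v1.18:

    # Tell kubeadm to use containerd instead of the Docker socket at init time.
    kubeadm init --cri-socket unix:///run/containerd/containerd.sock
    # The equivalent kubelet-side flags would be:
    #   --container-runtime=remote
    #   --container-runtime-endpoint=unix:///run/containerd/containerd.sock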
C: To my knowledge, they're still discussing problems related to the Windows kernel, but it's very actively being worked on, and I guess people like Michael Michael can provide a better update for this. But I do understand that containerd is going to be the preferred container runtime for Windows too. But so, this is long term — short term, what do we do with all the, you know, kubeadm users that use Docker?
C: It's a good question. Well, that's basically the ownership of the dockershim: SIG Node doesn't want to maintain it — should the other SIGs step in to potentially maintain it until we completely remove Docker support from Kubernetes? Which is kind of — it's going to be quite a surprise to people out there, I think.
A: If we're going to do this, we — I think we need to go through a formal deprecation cycle, right? And so that way we give plenty of notification out there that, you know, we are deprecating Docker support unless somebody steps up over here, and our preference will be to default to containerd. And I think, as long as we do the full deprecation policy and procedure, I think that we'd be okay — but we need — I think socializing it first is probably important.