From YouTube: 2017-06-23 18.21.25 SIG-cluster-lifecycle 166836624
B
Will we mention that before we start recording? We talked briefly about component statuses for HA, and we linked a couple of issues in the notes. We didn't make any decisions or anything here, so the notes sort of capture where we're at. The first agenda item is to follow up on action items from last week's meeting. There were three action items; we'll start with Lucas's, because he'd marked it as done in the previous meeting notes. That one was to update the checklist, and you've got issue number 127, which is the self-hosting issue.
C
Let me see what I wrote down there. Well, first of all, the self-hosting work and these DaemonSets: I have a pull request for all of that, which we'll rebase now that the code freeze has lifted, and hopefully we can get it merged sometime next week. Uploading certificates to the API using Secrets: an external contributor wanted to do that and already sent the PR in the mid or early 1.7 cycle.
C
Unfortunately, it was a bit controversial, and we hadn't really reached any clear decision at the time about how to move forward, so it stalled out. I talked to him, and he will rebase the PR and then propose a create-first-then-delete DaemonSet update strategy. We can then use that strategy for the controller-manager and scheduler DaemonSets, which was another action item. We'll come to writing a kubelet checkpointing proposal and opting in to that for kubeadm; those are basically the two later agenda items.
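The create-first-then-delete update order discussed here can be sketched as a small simulation. This is a hedged illustration in Python, not the actual Kubernetes controller code; the node and pod names, and the `make_new` helper, are invented for the example:

```python
def create_first_then_delete(nodes, running, make_new):
    """Sketch of a per-node create-first-then-delete DaemonSet update.

    `running` maps node -> current pod name; `make_new` builds the
    replacement pod's name. Returns the event log, which shows that
    on every node the create happens before the delete, so the
    component is never completely absent from a node. This matters
    for self-hosted control-plane components like the scheduler and
    controller-manager.
    """
    events = []
    for node in nodes:
        new_pod = make_new(node)
        events.append(("create", node, new_pod))       # new pod lands first
        # the real controller would wait for the new pod's readiness here
        events.append(("delete", node, running[node])) # old pod removed after
        running[node] = new_pod
    return events
```

A delete-first strategy would leave a window with zero scheduler pods on the node, which is exactly what this ordering avoids.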
B
And I can touch on those. So I sent him a private message to ask how he'd feel about that. He started a discussion which, unfortunately, is internal to Google; we should definitely move that over to, like, the public SIG mailing list. Right now he's just kind of feeling out information, and his last ask was whether we were impacted at all by the proposal to switch the DaemonSet controller to use the regular scheduler, instead of just bypassing the scheduler and scheduling things itself.
B
I don't believe that impacts this ask at all; that's sort of orthogonal, and I think everything will still just continue to work. What we want is for the DaemonSet controller to create before deleting, regardless of how the scheduler schedules things; the scheduling isn't too important to us. There's another issue about taints that we actually do care about, but as for where the scheduler runs, we care less about that, yeah.
C
I think so too. In any case, they can't break backward compatibility when moving the scheduler, or the DaemonSet controller, over to using the scheduler to schedule. Right, right. So it's not that important that we rely on the current behavior and expect it to stay the same. Right.
B
So I view switching over to the real scheduler as blocked on issue 45717 in the main repo, which is about supporting scheduling of workloads that tolerate not-ready nodes, because the DaemonSet controller currently does do that. So we need the regular scheduler to have the same behavior, at which point we can switch over. So that's where that is. I'm going to follow up on that thread; I'll probably just open a GitHub issue and link to it from where we are, and we can go from there.
B
I don't think Tim's here yet, and I haven't seen the proposal he put together for pod checkpointing, but I'll also mention that I talked with Dawn a couple of days ago and sort of verbally explained what I thought we were doing. She agreed that that would be fine, and said that for SIG Node it's on their roadmap as a P1 item. So it's a little bit inconsistent with our P0 priority for this task.
B
SIG Node has a whole bunch of P0 things, and she has been told by her management that they had too many P0 things and had to bump something to P1. So ours is a P1. I don't think that means it won't get done; they seem to have a pretty good track record of delivering the things that they sign up for, especially if we're doing the implementation and just asking for review bandwidth. I think it will be fine, so I think we're in pretty good shape there.
C
And to follow up from the first item: we basically talked mostly about scoping things. Well, what is the scope for self-hosting the control plane in the first place? Kubelets won't be affected at all by the checkpointing thing, which is now going to be built into the kubelet. But one action item which I probably could do is documenting the issues with using a Deployment instead of a DaemonSet, and there's a lot of complexity in there.
C
So I could do a workflow diagram, if that's useful, like what's the actual problem with using Deployments; but since we've agreed on using DaemonSets, I don't know if it's a priority right now. That was one. And then we didn't get that far into upgrades or HA, no, but we stated that the phases work is a documentation-slash-clarification effort, and it's important in the sense that we have to basically get an understanding of what we're doing before we start building out any more things.
C
I already wrote the doc for that, which is what the kubeadm team is working from now, and we'll iterate on that one as we agree on the upgrades and HA design later, so I think that's in pretty good shape. A stretch goal for 1.8 is to actually expose these phases in an API, in a correct manner, for external consumers like kops and kargo, but that's a stretch goal.
A
We're waiting for Tim to join us; he says that he'll join shortly, so that's fine. We've already gotten started, though. Does anyone want to... actually, we should wait until Tim joins, and then we can bring everyone up to speed at the same time. But please proceed with whatever you were talking about, okay.
C
Yeah, so to get started on upgrades a little bit: I think we all agree that the kubeadm upgrade command will just talk to a controller of some kind, or post a custom resource of some kind via a CustomResourceDefinition, or in some other way inform a controller that it should upgrade this cluster. I think we all agree on that. But are we on the same page about that, instead of having the logic built into kubeadm?
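The "post a custom resource that a controller acts on" idea could look roughly like the following. This is a minimal sketch; the API group, kind, and field names here are invented for illustration and are not an actual kubeadm API:

```python
def make_upgrade_request(target_version):
    """Hypothetical shape of the custom resource kubeadm might post
    via the API server to ask an in-cluster controller to upgrade."""
    return {
        "apiVersion": "upgrade.example.io/v1alpha1",  # invented group
        "kind": "ClusterUpgrade",                     # invented kind
        "metadata": {"name": "upgrade-to-" + target_version},
        "spec": {"targetVersion": target_version},
        "status": {"phase": "Pending"},
    }

def reconcile(resource, run_upgrade):
    """Minimal controller-loop body: notice a pending upgrade request,
    perform it, and record completion in the resource's status."""
    if resource["status"]["phase"] == "Pending":
        run_upgrade(resource["spec"]["targetVersion"])
        resource["status"]["phase"] = "Done"
    return resource
```

The point of the split is exactly what is argued above: kubeadm only writes the request object, and the upgrade logic lives in the controller, so the upgrader itself can be upgraded.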
A
You can upgrade the upgrader, yeah; I'm supportive of that, and I think that makes sense to me. Maybe you could just explain something: when you say "talking to a controller," presumably you mean via the API server, right? Yeah.
D
For our upgrades right now, we run the thing that manages upgrades constantly in the cluster, but it's actually not doing a constant reconciliation loop right now; it's kind of: you trigger it, and it runs the upgrade. It's just kind of sitting there all the time. We've kind of gone back and forth on cases where we might want it to actually run constantly in the cluster, like if you have nodes joining at old versions or something like that, bringing them back up to date, but it could go either way.
A
I mean, maybe this is a detail that we don't need to go into right now, but if you had a five-master cluster and only four of the masters were up when you started the upgrade, does the upgrade controller just wait until the fifth one comes back? How does it know how many masters there were meant to be?
C
So one really, really hacky thing that comes to mind when it comes to upgrading nodes: I mean, there's potential to temporarily use Jobs or DaemonSets of some kind, with access to the host namespaces and such things, but I don't think we'll go that way. I mean, it's technically possible, but I really think it's out of scope anyway.
D
I can speak a little bit to how we've been doing it, and it actually works; it's pretty simple and works pretty well. The pattern would have to change based on the underlying host OS, but essentially we deploy an agent to all the nodes via a DaemonSet, and that agent is just looking at its own Node object for annotations.
D
The annotation says: this is the kubelet container image URL and tag. Then, when that gets changed, the agent just writes that to an environment file that a systemd service unit is consuming, and then it essentially just restarts the unit, which runs the kubelet either via Docker or via rkt. It actually works pretty well, because essentially you're just saying: download this new container image, restart the kubelet, and it comes back up. The agent is like 100 lines of code, just watching its own Node object. It's not that crazy.
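The core step of that node agent, turning the annotation on its own Node object into an environment file for the systemd unit, can be sketched like this. The annotation key and environment-variable name below are invented for illustration, not the speaker's actual implementation:

```python
def render_env_file(node_annotations, key="upgrade.example.io/kubelet-image"):
    """Sketch of the node agent's core step: read the desired kubelet
    image from an annotation on the agent's own Node object and render
    the environment file a systemd unit would consume. Returns None if
    no image was requested, so the unit is left alone."""
    image = node_annotations.get(key)
    if image is None:
        return None
    # the real agent would write this to disk and restart the unit
    return "KUBELET_IMAGE=%s\n" % image
```

The systemd unit would then reference `$KUBELET_IMAGE` when launching the kubelet container, so a change to the annotation plus a unit restart is the whole upgrade.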
A
Cool, sounds plausible as something to tackle later. I mean, I've always been of the opinion that the thing that installs things should be responsible for upgrading them, just as a general rule. So the thing that installs the kubelet is the package manager, and so it should be that same thing, whether it's a human using the package manager or a config management system using the package manager, that is responsible for upgrading it.
B
I think the way you described it, the package owner is still responsible for doing it, but the automation system that kicks in and executes that command is itself run in the cluster, yeah, rather than using Puppet or Chef or something else to sort of reach around behind the system and run it. The upgrade uses a DaemonSet, effectively, to run the upgrade for you when we want to upgrade, yeah.
B
Yeah, we need to have a way to upgrade kube-proxy, right, and any other sort of system daemons that run as DaemonSets. That's right. So if we're running fluentd as a DaemonSet, that needs to be upgradable as well. I don't know if we're doing that today, or if we allow users to just install it themselves, yeah.
A
I don't think we should ever use the "latest" tag, but I think that kubeadm upgrades could... I guess it's going to have to go and reach out to a container registry and list the tags or something, to see which versions of the controller there may be.
B
It can be built into the kubeadm release. It could say: I know that these are the versions that I should use. There can be a map in there saying that if your cluster's at this version, you ought to use a different version of the upgrade image; otherwise, I don't know what to do, so I'll just use the latest version of it, yeah.
B
I mean, it's also: we don't necessarily have to cut a new kubeadm for every release, every patch release, just for this purpose, right? You could say something like: if you're running 1.7.something, then use this version, and if we cut new versions of 1.7, we just keep using that version. Whereas if we find a bug, we'll cut a new kubeadm that says okay.
B
Now it's not "1.7 anything"; it's 1.7.0 through 1.7.4, but if you're on 1.7.5, you need this new version of the thing, because we fixed a bug in it, and then we're up to date. At that point, that also divorces the kubeadm release schedule from the main Kubernetes release schedule, which we kind of want to do at some point anyway.
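The built-in map being described, cluster patch version to upgrade-controller image, could be as simple as a range table shipped in the kubeadm binary. This is a hedged sketch; the image tags and the exact cutover at 1.7.5 are illustrative, echoing the example above, not real release data:

```python
# Hypothetical table a kubeadm binary could ship with for the 1.7 series:
# (min_patch, max_patch_or_None, upgrade_image_tag)
UPGRADE_IMAGE_FOR_17 = [
    (0, 4, "upgrade-controller:v1"),     # 1.7.0 through 1.7.4 share one image
    (5, None, "upgrade-controller:v2"),  # 1.7.5 and later need the fixed image
]

def pick_upgrade_image(patch):
    """Return the upgrade image to use for cluster version 1.7.<patch>."""
    for low, high, tag in UPGRADE_IMAGE_FOR_17:
        if patch >= low and (high is None or patch <= high):
            return tag
    raise ValueError("unknown cluster version")
```

Because the table is data, a new kubeadm only needs to be cut when an entry changes, which is the decoupling from the main release schedule mentioned above.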
A
That seems reasonable, but when you say "1.7 anything": how does kubeadm know which versions are available to upgrade to, other than having them hard-coded in its source code?
A
I'd like to be able to say "kubeadm upgrade" without pinning on latest, but rather: give me the latest version of Kubernetes, or do a dry run and show me what the latest version of Kubernetes would be if I tried to upgrade to it. I don't want to have to force users to always know what version they want. Yeah.
B
Maybe a little better than inspecting tags; with tags, we can't tell if the highest-numbered tag is actually a stable release or if it's just a higher number, right. So there are times where we'll cut, you know, 1.6.5, and it's got some bug, and we tell people not to actually use that, but we did cut it, and then we cut 1.6.6 right after. So I think just relying on the semantic version number being higher is not safe enough.
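The point about the highest tag not being trustworthy suggests resolving "latest" against a curated known-good list rather than the registry's tags. A minimal sketch, with invented version data mirroring the 1.6.5 example above:

```python
def latest_known_good(available_tags, known_good):
    """Pick the newest version that is both published and on a curated
    known-good list, instead of blindly taking the highest tag. A tag
    like v1.6.5 may exist in the registry yet be a release users were
    told to avoid. Returns None if nothing qualifies."""
    def key(v):
        # "v1.6.4" -> (1, 6, 4) for a proper numeric comparison
        return tuple(int(x) for x in v.lstrip("v").split("."))
    ok = [t for t in available_tags if t in known_good]
    return max(ok, key=key) if ok else None
```

The known-good list is exactly the kind of data that could live in plain text files per release branch, as suggested just below.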
A
Okay, and this may want to be text files in the releases repo with branches, as in 1.7, 1.8, 1.9. Okay, so there's not currently a way of asking what that would be. And I should probably, or someone should, make a proposal for what the CLI user experience should be for kubeadm upgrade, because we're kind of flying blind until we have that. I'll make a rough sketch of that.
D
We went through that: we were hard-coding known versions into a binary for the same reason. You don't want this upgrader to upgrade to a version that it didn't know about, because there might be something that it has to handle in that version. So not just an arbitrary "I'm going to upgrade to this new version and it might or might not work."
D
There was a lot of overhead with that, to the point that it actually did become kind of painful, where we were constantly having to rebuild this tool and recompile it for essentially patch versions, and it didn't actually matter; we weren't adding in these custom things. So we kind of went half-and-half at this point.
E
Totally. So, Lucas here: yeah, I want to point out that I'm working on a PR that, if I remember well, was asked for by Jacob, where we are leaving a warning in a pre-flight check when you are installing, with kubeadm, a Kubernetes release which is much older than the kubeadm version. You did a review on this PR, if I remember.
C
Yeah, and I'm coming to understand it in the same manner as we said: in an air-gapped variant of kubeadm, you can't use the "latest" placeholder. It will error out, saying you have to specify an exact version; otherwise it doesn't know what to install. I think that's actually a fair compromise between both parties: if you want to upgrade to something and you have no internet access, you just have to tell kubeadm which version you want to upgrade to.
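That air-gapped compromise, "latest" resolves online but errors out offline, can be sketched in a few lines. This is an illustration of the behavior described, not kubeadm's actual code; the lookup callback stands in for whatever network resolution would exist:

```python
def resolve_target_version(requested, lookup_latest=None):
    """Resolve the version the user asked to upgrade to.

    If the user passed an exact version, use it as-is. If they passed
    the "latest" placeholder, resolve it via `lookup_latest` (a stand-in
    for a network lookup); with no lookup available, as in an air-gapped
    cluster, refuse with an explicit error instead of guessing.
    """
    if requested == "latest":
        if lookup_latest is None:
            raise ValueError(
                "cannot resolve 'latest' without internet access; "
                "specify an exact version, e.g. v1.7.1")
        return lookup_latest()
    return requested
```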
B
I think, like Aaron was saying, it's probably okay for that to work across patch releases, but it's scary to do that across minor releases. You wouldn't want kubeadm 1.8 to forward-upgrade Kubernetes to 1.9; that part doesn't know enough about Kubernetes 1.9 and what's changed in the system to do that safely. Yeah.
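The skew rule stated here, patch upgrades are fine but a kubeadm must not drive an upgrade to a minor release newer than itself, is easy to express as a check. A hedged sketch, not the project's actual policy code:

```python
def skew_allowed(kubeadm_version, target_version):
    """Sketch of the skew rule above: a kubeadm built for 1.8 may drive
    upgrades within 1.8 (or to older minors it knows about), but must
    refuse a target like 1.9 that it knows nothing about. Versions are
    "major.minor.patch" strings, with or without a leading "v".
    """
    def parse(v):
        return tuple(int(x) for x in v.lstrip("v").split("."))
    ka, tgt = parse(kubeadm_version), parse(target_version)
    # allow only targets whose major.minor is at or below kubeadm's own
    return (ka[0], ka[1]) >= (tgt[0], tgt[1])
```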
C
Yeah, but I actually think that what Robert said there was really reasonable. Like, we hard-code the version that kubeadm should use, the controller version tag that kubeadm should use. Like, we're saying that we're working with version one, or whatever, of the upgrader; it's hard-coded into 1.8.0, and first in, like, 1.8.4 we notice that something has changed.
D
One option might be this: the upgrade controller itself is just built very generically, and it knows certain operations, like "apply these flags," "apply this manifest," or, you know, "change this image field of this manifest," to perform an update. But the actual upgrade itself is just providing a payload that says: here are the new things that you need to know about.
D
Well, I mean, ideally not necessarily business logic per se, but rather what needs to change, ideally declaratively. If you can manage the entire manifest, that's the easiest. The idea is that your update payload would be like: here's your kube-dns manifest, you need to apply this. And the code in the update operator is essentially just saying: oh, I've been given this manifest, I need to just apply it and wait for it to be applied. I'm going to update DNS, so I'm going to watch that Deployment; look, it's done, cool, move on to the next component.
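The generic operator described here reduces to a short loop over the payload. A minimal sketch, with manifests represented as plain dicts rather than real Kubernetes objects, and `apply`/`wait_ready` standing in for what kubectl-style apply and a rollout watch would do:

```python
def apply_payload(payload, apply, wait_ready):
    """Sketch of the generic update operator: the upgrade payload is
    just an ordered list of manifests. For each one, apply it, wait
    for it to become ready (e.g. watch the Deployment's rollout), then
    move on to the next component. Returns the names in completion
    order so the caller can report progress.
    """
    done = []
    for manifest in payload:
        apply(manifest)        # declarative: the manifest is the change
        wait_ready(manifest)   # block until the rollout finishes
        done.append(manifest["name"])
    return done
```

All version-specific knowledge lives in the payload, so the operator binary itself rarely needs to change, which addresses the recompile-per-patch pain described earlier.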