From YouTube: 20180626 SIG Cluster Lifecycle
A
Hello, today is June 26, 2018. This is the normal SIG Cluster Lifecycle meeting. What we had planned originally was to go through the 1.12 planning document, which I'll share in a second, but before we start to do that: are there other topics that folks here in this group would like to discuss?
A
I was thinking we could just start from the top down and walk through some of the items, and if there are things that are missing that folks want to address, feel free to add to the doc. Please append to the end, though, so we can at least have some order to how we're talking about stuff. I think the first thing that I'd like to discuss, because it's not finished for 1.12, is the docs.
A
There's also one thing that we did in the docs that we haven't completed: cleaning up the kubeadm repo to be the canonical home for features and details that aren't in the main documentation, the things that people ask questions about but that really should not live in the main documentation. These are power users who want to go off into their own lands, and we want to be able to point them at a proper location, but we also don't want to confuse people or put it in the main docs.
B
So I'm not sure about the docs initiative. I guess we decided some things in the previous meeting, but I think we need to discuss more here, and I'm considering setting up a one-on-one with Jennifer, if she agrees, of course; perhaps, Tim, you could join us as well. I need a direction for the movement of files between folders: okay, we're going to move this from here to here, and we are going to include this new stuff in that file.
A
The next item is basically kubeadm to GA. What are the items that we as a SIG feel are the action items required in order for us to get kubeadm to GA? Obviously, getting the config apparatus to at least v1beta1 is important. So: comments, questions there? Yeah.
A
I'd prefer to have a KEP that outlines the details of the changes we would like to make. I think we played a little fast and loose (not a little, we played very fast and loose) in the 1.11 cycle, and I think we should be much more strict as we start to approach GA, so that it's clear what's been decided and anybody can take a look at it and understand. Yeah.
C
We can just rename it late in the cycle if we feel confident enough for it to be beta; in the worst case, it's just going to lie in the tree as one more version. Then, as for what Britta and Mike Taufen and others in the SIG had very legitimate comments on, embedding the other components' ComponentConfig versus having them in different files or different documents: I'm kind of convinced that is what we should do.
C
So our master configuration file, or rather our kubeadm configuration file, is one file: the first YAML document is the init or master configuration, whatever we call it. Then we have the kubelet configuration, the kube-proxy configuration, the API server configuration, whatever the other ones are. Those are optional, of course, but if users want them, and as new ComponentConfigs become available, we can just have them in the same file as separate YAML documents. I also have some work in progress there locally.
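A minimal sketch of the single-file, multi-document layout being described here, assuming the then-current kinds and apiVersions; the exact names were still under discussion at the time of this meeting:

    apiVersion: kubeadm.k8s.io/v1alpha2
    kind: MasterConfiguration        # the init/master document comes first
    networking:
      serviceSubnet: 10.96.0.0/12
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration       # optional base kubelet config
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration     # optional base kube-proxy config
    mode: iptables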
C
Some of the discussion was on the API types documents in Google Docs, on which one would be the one; there was also some discussion there about whether there would be a single one.
C
So yeah, I mean, we have one file with three different documents: one for the kubeadm configuration, one for the base kubelet configuration, and one for the base kube-proxy configuration. Then config in the kubeadm global struct may, or will, affect the kubelet's ComponentConfig, like we have now. So if you set the service subnet for the whole cluster in the kubeadm config, it will also set the DNS IP in the kubelet config.
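To make that defaulting relationship concrete: kubeadm derives the kubelet's cluster DNS address from the cluster-wide service subnet (by convention, the tenth address of that range). A minimal sketch with illustrative values:

    # kubeadm document: cluster-wide setting
    apiVersion: kubeadm.k8s.io/v1alpha2
    kind: MasterConfiguration
    networking:
      serviceSubnet: 10.96.0.0/12
    ---
    # kubelet ComponentConfig document: if clusterDNS is left unset,
    # kubeadm defaults it from the service subnet above
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    clusterDNS:
      - 10.96.0.10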
A
The next one is making the kube-proxy ComponentConfig v1beta1, and my comment there is: who's going to drive that? Because that's out of our hands, right? That's kube-proxy. Yeah, no one. That said, that was one of the problems: no one currently owns it, so the only drivers for making that change would be us.
C
kubeadm accepts that config file and uses all the instructions in it, but say you have a config file with options and then you specify a node name on the CLI, because the node name is different on each host you run on while the config file is shared. Then the CLI flag is going to override the node name value in the config.
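A sketch of the override behavior being described; note that how freely --config can be mixed with other flags has varied between kubeadm versions, so treat this as illustrative of the intent rather than a guaranteed invocation:

    # the same config.yaml is distributed to every host;
    # --node-name differs per host and wins over the file's value
    kubeadm join --config=config.yaml --node-name="$(hostname)"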
A
If my high-school experience working in a drive-through helped me, I think I parsed what you said, and I kind of agree. I don't think we should be spending more time on it, because there isn't somebody to drive it through to completion, it's full of peril, and the people who have done it before have basically tried many different approaches to doing aspects of it.
A
Exactly, there's nothing preventing you from pivoting on your own, right? If you really care that much to do it that way, you can have your own pivot semantics, and for HA it's a lot less dangerous, right? The problem we always got hung up around the axle on was trying to deal with the degenerate case of the single master, and then it really becomes: that's the degenerate case where we have to deal with DC outages and everything else.
C
So yeah, I've also started thinking in more detail about the scope of kubeadm. I mean, the initial thought was: okay, we have some OS underneath, we execute a command to get a master, we then execute a command on another machine to get a node. If we then want to upgrade any of these, we execute another command, and for me that's basically it.
A
The ginormous foot gun, and, you know, it could either be great and you can land double taps, or you can just blast off your own leg. Yeah, I'm a fan of just making it explicit and simple; it simplifies the code, and we can defer to other technologies to do that abstraction layer for upgrades, making them more seamless. So I think if you manage things through Cluster API, you could have another abstraction layer which could allow you to do it in a much more seamless way.
C
Exactly. I also like the thinking of doing the, whatever, GitOps or controller-like sync, like the rest of Kubernetes: using the Cluster API for this, you just change one knob and the cluster will roll out its upgrade itself, rather than saying this to kubeadm and making kubeadm the owner of upgrading everything, even in HA modes. So yeah.
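A hedged sketch of that "change one knob" idea, using the cluster-api Machine type roughly as it stood in its v1alpha1 draft: bump a version field and let a controller reconcile the node:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Machine
    metadata:
      name: node-01
    spec:
      versions:
        kubelet: 1.12.0   # bumping this asks the controller to roll the node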
F
On etcd in general: I would like, and I know we're very focused on kubeadm in this doc at the moment, but I would like to propose the etcd manager work, sort of see if we like that, and see if we're willing to make it a SIG repository under this particular SIG. So that might actually help some of this in the end.
A
Yeah, I'm totally fine with that if there are people who are going to take ownership of it. I think the one problem we've kind of had as a community is that we had a lot of good ideas and then they bit-rotted; if there are people like yourself who are going to own, maintain, track, upgrade, and integrate, then I have no qualms at all.
C
Me neither; I think that sounds really good. We are definitely kind of kubeadm-focused at the moment, but it's not because we don't want any other projects; it's just that we want to get this done first and then focus on other problems, for myself at least. But if people like yourself want to do an etcd manager, please, yeah.
A
I think the next thing on the list was: rationalize with Cluster API. I know that we've started to do pieces of this, and there is overlap between the configuration for kubeadm and some of the API constructs for the Cluster API, such as machines and how machines join, and I'd kind of prefer it if there were a way for them to leverage some of the configuration apparatus.
A
So I know they want to be abstracted from all the deployment scenarios, but the whole purpose of making kubeadm the core nugget is that other installers could use that nugget along the way, right? That was kind of the premise, right? And there are pieces that exist inside of the Cluster API which I do think are redundant, such as the taint apparatus for initialization, and other thoughts there, I think.
C
So I'm looking at the ClusterSpec right now and it's kind of completely empty; it has ClusterNetwork, nothing talking about, like, the full cluster view. It has the cluster network with services and pods and a service domain, and we should probably make that align, because we have exactly the same kind of networking struct: ours is called Networking, and it has serviceSubnet as a string, I think, podSubnet as a string, and a service domain.
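Side by side, the two structs being compared, with field names as they stood around this time and illustrative values:

    # Cluster API ClusterSpec
    clusterNetwork:
      services:
        cidrBlocks: ["10.96.0.0/12"]
      pods:
        cidrBlocks: ["192.168.0.0/16"]
      serviceDomain: cluster.local
    ---
    # kubeadm's equivalent Networking struct
    networking:
      serviceSubnet: 10.96.0.0/12
      podSubnet: 192.168.0.0/16
      dnsDomain: cluster.local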
A
Absolutely. I think what we're saying is that Heptio, as a company, is going to throw down on Cluster API pretty heavily in 1.12, and on rationalizing these to make sense, right? So we'll try to help drive that. I just want to put that down as a feature item for us to address going forward, because it doesn't make a lot of sense to have redundancy of the same flags and features percolating all the way through.
C
It's unclear to me as well, and maybe Justin has more context, how we're going to configure, for example, the API server, or if that's out of scope. Is configuring the API server, controller manager, scheduler, proxy, kubelet, and whatnot in scope or out of scope for the Cluster API? Because that is essentially what it's doing.
C
Yeah, but thinking about, say, a 100% vanilla thing that deploys a Cluster API cluster on your bare-metal machines using Terraform or whatever: it would look something like the normal Cluster API flow. You have a tool for it, it talks to Terraform to get your nodes, and then you check in the full kubeadm config, with all the YAML documents and all the stuff, in the provider config. Something like that, yeah.
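A hypothetical sketch of what checking the full kubeadm config into the provider config could look like; providerConfig payloads are provider-specific, so the embedded field name here is made up for illustration:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Machine
    metadata:
      name: master-01
    spec:
      providerConfig:
        value:
          # hypothetical field: the full multi-document kubeadm
          # config carried along with the machine definition
          kubeadmConfiguration: |
            apiVersion: kubeadm.k8s.io/v1alpha2
            kind: MasterConfiguration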
C
Full control plane using kubeadm init; cool. Move phases into init: I have a couple of other things as well that we might want to do for 1.12. I looked into this, and if we can get it in, that is okay, but it's probably going to be tight the first time, I think.
H
I think the whole idea is that we will actually have subcommands under init or under join, in which you can access the phase executions, like the ad hoc ones. The alpha phases command is going to be broken out, for UX, into init and join. So this is not changing the directory structure of the Kubernetes codebase; it's changing the CLI structure. Does that make sense?
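Concretely, the UX change being described would look roughly like this; the 1.11-era alpha command is real, while the init/join subcommand names are the proposal and therefore illustrative:

    # 1.11-era ad hoc access to phases:
    kubeadm alpha phase certs all
    # proposed: the same executions as subcommands of init/join:
    kubeadm init phase certs all
    kubeadm join phase kubelet-start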
C
Yeah, it all depends on what kind of scope we want to set here: whether it's a purely CLI change or whether it's more internal. Justin, by the way, while you're here: you said you had some kind of task executors in kops. Could you link to the source code?
C
Thank you, yeah. So it also kind of depends on whether we want to do something more modular. For example, right now every phase is a snowflake, and that is bad, code-wise, inside of kubeadm. So I guess, if we want to do this, we might want to wait for the refactoring of that first, and that is going to take time.
A
I'm not going to buy into all of that yet without seeing deets, right? Like, I want details before I'd buy into it. I can see a path forward that is much more sane for us, that is traditional. I do realize, I have seen, the code structure the way it is today, but what you're proposing is also at the opposite extreme.
F
The problem, yeah, the problem is, like, all the retrying in a bash script is just so painful. The hardest thing in kops, the flakiest, most risky thing, is the bash script which runs on AWS and downloads the bootstrap Go program.
I
OK. The master join workflow is part of what we were talking about with Tim shortly before. The goal is to streamline the creation of the second, third, and so on master nodes, making it possible to create a very reliable HA cluster: to streamline what people are doing today by invoking kubeadm init several times and then manually changing some config around it.
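The manual flow being streamlined, as a rough sketch; hostnames, paths, and the external-etcd setup are illustrative assumptions, not the proposal itself:

    # on master-2, after master-1 is up: copy the shared certificates,
    # then run init against the same config and external etcd
    scp -r master-1:/etc/kubernetes/pki /etc/kubernetes/
    kubeadm init --config=master-config.yaml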
I
The proposal is there, and I'm currently the updater of the proposal. We can simplify it even more by removing support for the missing part, or deciding whether to support it: the process now assumes that we have an external etcd that is already scaled up. The work on my side is nearly completed, and I think that what remains to be designed is the UX story for the final user; so, in my opinion, it should be in for GA.
C
Yeah, well, we'll talk more about that later, but in any case I expect that we would design, even though we don't have the code yet, we would design v1beta1 to do all this stuff, to support this workflow that we're trying to achieve. But anyway, I haven't thought about it a lot.
C
So we should know what we think about kubeadm going forward before starting to work on the config API; I think that is probably the most productive way forward. Sync with you, sync with others from SIG API Machinery, or maybe in the architecture office hours, or whatever the broader community is, before starting to check in code. I really have to check that PR, because, yeah.
A
There's certain jurisprudence where we don't necessarily need buy-in, and I think that makes sense, but I think we should also... the only other group that I'd think of, if SIG API Machinery should not be one of them, is the SIG Apps group, because the overlap between apps and add-ons is really weirdly large, and conflated to the point where I see almost all add-on management now sitting underneath SIG Apps.
C
I haven't made up my mind yet how to prioritize, but we kind of need to do it. As we said earlier, we're not in a good place with the structuring of the bootstrap token code, because the tokens are not exposed as they should be, and SIG API Machinery really doesn't want to have stuff like this that sits in between: between an explicit API group and type, and just using well-known constants for our Secrets.
C
It's something in between, and there are also dependency issues: the API server should be able to access it, people using client-go should be able to access it, kubeadm should be able to access it, etc. So moving it to, for example, k8s.io/bootstrap was a proposal, and I think we need to start executing on that.
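For context on the "well-known constants" shape: a bootstrap token today is an ordinary Secret with a magic type and name prefix rather than a first-class API type. A sketch with illustrative token values:

    apiVersion: v1
    kind: Secret
    metadata:
      name: bootstrap-token-abcdef      # well-known name prefix + token ID
      namespace: kube-system
    type: bootstrap.kubernetes.io/token  # well-known Secret type
    stringData:
      token-id: abcdef
      token-secret: 0123456789abcdef
      usage-bootstrap-authentication: "true"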
C
Yeah, so, together with folks from CNCF we just concluded that nobody has done anything about the RPM repos yet. I'm just thinking about how we want to prioritize this. Shouldn't we just accept that we're implicitly the owners of the debs and RPMs and where they live, etc., and that we need buy-in from SIG Architecture and all that kind of stuff? But do we want to say that kubeadm is GA if we're using, you know, installation instructions and a deprecated repo from Google?
A
I have a question here, because hosting the debs and the RPMs as part of the installation instructions requires that precursor step, and there's nothing that precludes us from just creating a canonical container for a given release that does the installation via a loopback-hosted environment holding the binaries for the RPMs and the debs as part of the installation process. So you can do a yum install from a local container and it'll be fine, right? Yeah.
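A hypothetical sketch of that loopback-hosted idea; the image name, port, and repo layout are made up for illustration and are not existing artifacts:

    # serve the release packages from a local container
    docker run -d -p 8080:80 example.com/k8s-packages:v1.12
    # point yum at it and install
    cat <<EOF >/etc/yum.repos.d/kubernetes-local.repo
    [kubernetes-local]
    name=Kubernetes (local container)
    baseurl=http://127.0.0.1:8080/rpm
    enabled=1
    gpgcheck=0
    EOF
    yum install -y kubelet kubeadm kubectl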
A
You can set up a separate deb or yum repo to make this happen, right, and this gets us out of the business of, one, trying to get a Googler to push to some location, and it's a terribly managed apt and yum repository. It's just a container like everything else; we just treat it as such, you know, to house the artifacts that we need to give to people who are bootstrapping. Cool.
C
But not as a replacement, so I still think we need that, and then it needs to go to CNCF. It should, and I mean, getting there... I think a short, quick-fix kind of thing is that we make the Bazel debs and RPMs the same as the release debs and RPMs. We have a really, really small delta there; we have closed it during the cycle, and we just push the actual files to two locations: we push them to the Google thing and we push them to CNCF, and in our instructions...
C
We start referencing: use this package, or use this apt repo, from CNCF. And then we have, like: okay, we will support the Google repo for a year or something, but it's more GA kind of, because we say that going forward you should use this CNCF-backed repo. But for all of this we need steering-committee-level buy-in and stuff, and it's like a political thing. Yes, but everyone kind of agrees, so I just think...
F
Good. Well, there is somebody from CNCF that is working on the problem, as I understand it; the issue is the signing key. Someone from CNCF is working on how they would actually hold the signing key, and if so, can we just talk to them and figure out where we can help them? Because... is that the biggest issue? No.