From YouTube: Kubernetes SIG Cluster Lifecycle 20180206
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.zgqqzba6ssut
Highlights:
- Demo & discussion of https://github.com/kopeio/etcd-manager
- kubeadm on GKE
- Splitting kubeadm out of the main repository
- Status of kubeadm moving to GA
- Publishing the SIG mission statement
- Flag proliferation and cleanup
A
B
Hi everyone, good morning. So yeah, I wanted to show something that I've been working on. I've been calling it the etcd-manager, which is sort of similar to the etcd-operator, except that it doesn't rely on Kubernetes. So we avoid some of the problems, perceived or real, with circular dependencies around etcd-on-Kubernetes-on-etcd type situations.
B
Let me share my screen. I'm gonna do a quick demo, and I expect this will be a vibrant topic of conversation in terms of, you know, sort of expanding our remit a little bit. We're not, you know, addressing our entire remit and how we want to do that. I would like to say that this is not a kops project; this is very different from kops.
B
So, sorry, what this does is: I've started an etcd-manager. I'm binding to 127.0.0.1; we're gonna use .1, .2 and .3. I've given it a cluster name, and we have this concept of a backup store, which is sort of the canonical store of recovery information if everything goes wrong. And we have a data directory where we're storing our data.
B
We say where we want to bind etcd to, and something that is important is we have a notion of a quarantined etcd configuration. During upgrades, or things which we consider to be otherwise unsafe, we still need to run etcd, so we bring it up in a quarantine mode where normal clients can't reach it.
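To make the moving parts he's describing concrete, here is a minimal Go sketch of the kind of configuration each manager takes. The field names are illustrative assumptions, not the actual etcd-manager flags; the ports reuse the 4001/8001 numbers mentioned later in the demo.

```go
package main

import "fmt"

// Illustrative only: the rough shape of what the speaker describes passing to
// each etcd-manager process. Field names are assumptions, not real flags.
type ManagerOptions struct {
	ClusterName           string   // logical cluster name
	BackupStore           string   // canonical store of recovery information
	DataDir               string   // local etcd data directory
	BindAddress           string   // address the manager itself binds to
	ClientURLs            []string // normal etcd client endpoints
	QuarantinedClientURLs []string // endpoints used while the cluster is quarantined
}

func main() {
	opts := ManagerOptions{
		ClusterName: "demo",
		BackupStore: "file:///tmp/etcd-manager",
		DataDir:     "/tmp/etcd-data",
		BindAddress: "127.0.0.1",
		ClientURLs:  []string{"http://127.0.0.1:4001"},
		// During unsafe operations (e.g. upgrades) etcd is brought up on a
		// different port so normal clients cannot reach it.
		QuarantinedClientURLs: []string{"http://127.0.0.1:8001"},
	}
	fmt.Printf("%+v\n", opts)
}
```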
B
So that's how we get around a lot of the issues regarding upgrades and things like that. But anyway, all right, so you can see we're in a state where this etcd-manager is saying that it is just sitting there: "no cluster spec set; must seed a new cluster". What that means is it's had a look at the backup store, it's had a look at whether it's running etcd locally, there's nothing running yet, and it says you have to tell me to go ahead and create a cluster.
B
The idea of communicating through the backup store is that this way we can sort of separate out the manager, I guess, from the actual running of etcd. But if we flip back to that manager — okay, pretty fast — it's already started etcd. Once we told it to seed, it started running etcd and it's already actually done a backup to that backup store. So if we have a look into that backup store, which is /tmp/etcd-manager, and do an ls -R — okay, there's a lot in there.
B
You can see that we have some backups. They are date-stamped, and then they are the standard etcd 2 backups — in this case it was running etcd 2 — and there's a meta file at the top of them. The data directory has the etcd data that you would expect, and if we keep watching this particular backups directory, you'll see that, like every couple of minutes, it will do another backup.
B
There — so we now have another backup. But where it gets more interesting is, obviously, we can do a put — I'm just gonna copy that — and we can do a get, and it obviously works; well, it had better work. And let's now go from one node to three nodes. So we're gonna start two managers on our other two virtual nodes — they're on 127.0.0.2 and 127.0.0.3 — and these two are going to discover the peers, which we'll talk about soon. So they've all gossiped and found each other, but they're
B
then gonna sit there and not actually start, because remember our etcd cluster spec said to only run a cluster with a single member. But what we should see soon is it logging the cluster state — I'm just gonna see if it actually logged it. You can search for "cluster state"... there we are, there's the cluster state. So you can see our cluster state, the controller — so node two has actually become the leader of our little gossip network. We have our cluster state; it runs a control loop and iterates through — there's our actual state of etcd.
B
We have a single-node etcd running on 127.0.0.1, as you would expect. You can see we have a bunch of peers, which are all the etcd-manager processes that are sort of sitting there ready to go, and only one of them is actually going to be running etcd — which is that third one, which is why it looks a little disjointed there — but that's the state of the world.
B
Now, if I go and tell it — let's actually go ahead and resize that to a cluster size of 3. So we have a node in etcd, or rather a key in etcd, which has our cluster spec in it. You can see it says member count 1, etcd version 2.2.1 in there, and what we're gonna do is we're gonna say member count 3 — member count 3, right there.
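As a rough sketch of what that cluster-spec key might look like, assuming a JSON encoding and field names chosen here purely for illustration (the real etcd-manager format may differ):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative only: a guess at the shape of the cluster spec the demo edits.
type ClusterSpec struct {
	MemberCount int    `json:"memberCount"`
	EtcdVersion string `json:"etcdVersion"`
}

func main() {
	spec := ClusterSpec{MemberCount: 1, EtcdVersion: "2.2.1"}
	before, _ := json.Marshal(spec)

	// The demo simply rewrites the spec with memberCount: 3; the elected
	// controller notices the change and grows the cluster.
	spec.MemberCount = 3
	after, _ := json.Marshal(spec)

	fmt.Println(string(before)) // {"memberCount":1,"etcdVersion":"2.2.1"}
	fmt.Println(string(after))  // {"memberCount":3,"etcdVersion":"2.2.1"}
}
```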
B
So if we do that, we've written that to etcd, and now what will happen is our elected controller — which I believe is node two — has already picked that up. That was a little faster than I wanted it to be, and it has decided to start two more etcd members. So we're just gonna run etcd members on the remaining two nodes.
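What the elected controller is doing here is essentially a reconcile loop: compare the desired member count in the spec against the members actually running, and ask idle peers to start etcd until they match. A hedged sketch, with made-up helper names standing in for the real etcd-manager internals:

```go
package main

import "fmt"

// Illustrative only: the reconcile behaviour described above.
type clusterSpec struct{ MemberCount int }

// listRunningMembers pretends to query the peers; in the demo, only
// 127.0.0.1 is running etcd when the spec changes.
func listRunningMembers(peers []string) []string {
	return peers[:1]
}

// askPeerToStartMember stands in for the RPC the elected controller makes.
func askPeerToStartMember(peer string) {
	fmt.Println("asking", peer, "to start an etcd member")
}

func main() {
	spec := clusterSpec{MemberCount: 3}
	peers := []string{"127.0.0.1", "127.0.0.2", "127.0.0.3"}
	running := listRunningMembers(peers)
	for i := len(running); i < spec.MemberCount && i < len(peers); i++ {
		askPeerToStartMember(peers[i])
	}
}
```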
B
And if we look at the member IDs there, we have three members in our etcd cluster. So our etcd cluster — we just expanded from size one to size three, and it all happened totally automatically. I can show disaster recovery, or I can just talk about it, but essentially, obviously, you can do a simple, like, stop of any pod; in this case, when we restart it, it will see its local etcd state and recover. That's very easy.
B
B
B
We will dump all the keys and write them back in, and the advantage of that is that it means we can go between any two versions. The disadvantage of that — the biggest disadvantage of it — isn't that it's a little slower (it's not particularly slower), but that we lose the resource versions in etcd, so every watch will effectively break. And I've been talking a little bit about that to Daniel, actually; there's something coming in Kubernetes where we effectively randomize the resource version to stop people treating it like a number.
B
Okay, well, so I need to look more at what the exact behavior is, and whether we need to bounce all the nodes, or whether we can, like, change the resource version so it is higher than the previous ones. What we really don't want to do is rewind resource versions, I believe. But anyway, more work — another to-do. I think by now it should have finished... it has not, okay. That is great — we know it's a real demo because it failed. Oh.
B
It was my fault for pausing the output. Okay, that will come in a minute then; give it a second. I had the screen paused on the leader, and I guess that stops execution — I think that's what went wrong. Let's see. So, actually, what it's doing in this case, because it's not a trivially safe upgrade — oh, I think I broke it — because it's not a trivially safe upgrade, it will do a full backup.
B
It will quarantine each node during the backups, and it will do a full restore. I really did break it badly — that is a real demo. Okay, well, it should come back; it might take a little longer because it looks like it has to re-elect a leader. But yeah, that is it — there's a readme there, there's an overview of the code, there's a list of the shortcomings which everyone is welcome to contribute to, and that is effectively the etcd-manager that I have been working on for, well...
B
I thought it was much less time, but apparently it's been six months now. So that is where I am. I'm hoping we can try to get more automation around etcd management in a way that isn't tied into the particular installation tools. So — just, I don't know if you're watching chat there.
A
B
So that's — I was gonna show my screen; before I do that, I will do so. Currently there is — so we have a VFS layer, a virtual file system, there. We are reusing it from kops, but it's a couple of lines of code, and I'd actually like to move that out of kops, because it doesn't really have anything to do with Kubernetes even; it should be a generic Go pluggable VFS.
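A minimal sketch of what such a pluggable VFS interface could look like in Go. This is a guess at the shape, assuming hypothetical method names; it is not the actual kops vfs package API.

```go
package vfs

// Illustrative only: a generic pluggable VFS of the kind described, where the
// same code can target file://, s3:// or gs:// paths.
type VFS interface {
	// WriteFile stores data at a path such as "file:///tmp/etcd-manager/..."
	// or "s3://bucket/cluster/backups/...".
	WriteFile(path string, data []byte) error
	ReadFile(path string) ([]byte, error)
	List(prefix string) ([]string, error)
}
```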
B
So there's a VFS implementation for the file system, which is what we're using here — so file:///tmp/discovery — and you can bind that to S3 or GCS instead. Each one writes a little file that says, you know: who am I, here's my address, come talk to me. Then all the etcd-managers will gossip amongst each other, and so we use discovery as seeding, and we also then gossip to discover all the peers.
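A hedged sketch of the "who am I, here's my address" record he describes each manager dropping into the discovery location. The structure, field names, and path layout are assumptions for illustration, not the real on-disk format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative only: a guess at the discovery record each etcd-manager writes
// to the discovery location (file://, s3://, gs://, ...).
type discoveryRecord struct {
	ID        string   `json:"id"`
	Addresses []string `json:"addresses"`
}

func main() {
	rec := discoveryRecord{ID: "node-1", Addresses: []string{"127.0.0.1"}}
	data, _ := json.Marshal(rec)
	// In the demo this would land under file:///tmp/discovery/<id>; the
	// managers then gossip among themselves to find the full peer set.
	fmt.Printf("/tmp/discovery/%s -> %s\n", rec.ID, data)
}
```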
B
I would like the discovery mechanism to be pluggable, and I want to have discovery mechanisms that use cloud APIs where they are available. So, for example, on AWS, the way kops did it was we used the volumes API, and that's a nice way of doing discovery, but it is also a way of doing exclusion.
B
So there is a flaw here, which is: if you have, like, six nodes — six etcd-managers running — and you have a network segmentation, then three of them and three of them could form two independent etcd clusters. The way around that is either to make sure that you don't run more than enough to form a quorum — or, at least, not enough to form two quorums — so only run three if you want an etcd cluster of size three; and there's also a basic locking implementation in there right now.
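The failure mode he's describing comes down to simple arithmetic: if one side of a partition contains at least as many idle managers as the desired cluster size, that side can seed a complete cluster on its own. A small sketch:

```go
package main

import "fmt"

// canSeedAlone reports whether one side of a network partition has enough
// etcd-manager processes to seed a complete cluster of the desired size by itself.
func canSeedAlone(partitionSize, desiredClusterSize int) bool {
	return partitionSize >= desiredClusterSize
}

func main() {
	// Six managers split 3/3, with a desired cluster size of 3:
	// both halves can seed, so two independent clusters could form.
	fmt.Println(canSeedAlone(3, 3), canSeedAlone(3, 3)) // true true

	// Running only three managers for a size-3 cluster avoids this:
	// a 2/1 split leaves neither side able to seed a second cluster.
	fmt.Println(canSeedAlone(2, 3), canSeedAlone(1, 3)) // false false
}
```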
B
But it's a little early. I think discovery and locking almost go hand in hand, because, for example, on AWS and GCE, volume mounting can act as a lock, and we can say you have to get one of the persistent volumes in order to proceed. Then we can ensure that you never run more than three etcd-managers — or that no more than three etcd-managers actually participate, even if you're actually running a dozen, for example.
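A sketch of the pluggable lock he's describing, where acquiring something like a cloud volume attachment gates participation. The interface is a hypothetical shape for illustration, not the actual implementation.

```go
package lock

// Illustrative only: a hypothetical pluggable lock where (for example)
// attaching one of N persistent volumes acts as the lock, so at most N
// etcd-managers actually participate even if a dozen are running.
type Lock interface {
	// Acquire blocks or fails until this process holds one of the available
	// slots, e.g. has successfully attached one of the persistent volumes.
	Acquire() error
	Release() error
}
```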
B
B
D
B
B
You need a shared file system in which you put your backups, and in this case we're using it for discovery, and we probably want it for locking as well — unless you say, well, on bare metal, I know what I'm doing and I'm only gonna run three etcd-managers, three masters, for example. But I'm definitely trying to keep it pluggable and relatively easy to sort of combine the various guarantees. And I feel like one of the ways you can guarantee it on bare metal —
B
you can probably run that on NFS; it doesn't need to be anything special — etcd itself won't write to it in normal operations, so it won't be an I/O bottleneck. But, you know, I've been playing around with multicast as a means of discovery as well, which makes sense in some bare-metal installations, although it introduces all sorts of fun in others. That's one of those things that I'm hoping we'll make modular enough.
D
E
I don't — so kubeadm maybe doesn't need to do anything if it's considered external; we just get the data from the manager and stuff it in that way. That would ease my concerns, because then it's just: component A does this job, component B takes over from there and does that job, and we consider it an external thing, which is what it is. I had questions about how it's being stood up — if it's an external thing, then why would kubeadm... yeah, we could.
D
D
I mean, having separation of concerns is pretty important, but yeah, I was just thinking in terms of, like, usability — like having kubeadm maybe generate TLS certs and put them in the right place, and not expect users to, like, manage that properly, which can get a little bit tricky sometimes. But yeah, maybe we could — that's a general UX issue that we can work on separately, perhaps.
B
No matter what, I want it to work with kubeadm. I think that it is unhealthy to have kops, kubeadm, GKE — everyone — have their own little, like, enclaves, and I think it would be nice to have something where, at the very least, we agree that when we back up to S3 or GCS or a file system, we back up in a particular format. And I have one that is a straw man and not unreasonable.
B
B
But yeah, there are two ways to do that, one of which is to have it come in through the seed. Remember when we seeded and we said cluster size one — you could, I guess, specify the certificates at that time, and we would write them into the backup store and then we would go from there. Or, I guess, the other way would be for the controller — the elected controller — to go and create those certificates and put them into the backup store as well, right.
D
You could support both: like, you could allow users to manually create and then specify them, or you could have sort of TLS enabled by default, where, if they didn't supply them, the controller would do it for them. That way it becomes, like, impossible to end up with non-TLS clusters, right, because either way you have it...
E
B
B
B
E
B
D
B
So, during an upgrade in general with etcd, but in particular in the way we're doing it with etcd-manager — if we're doing that move from 2 to 3, for example, we will load in all the keys. So there will be a time when etcd does not yet have all the keys, and we don't really want Kubernetes to be in there seeing, like, half the world, and so there is a quarantine.
B
To do it, we bring up etcd, but we bind it, currently, to a different port. etcd 3 supports binding to domain sockets — UNIX domain sockets — so we could instead do that, which would be much cleaner, obviously, but that doesn't seem to be supported on etcd 2. But anyway, the idea being that we are able to bring up etcd during cluster transition operations and do things on it without — it's a way of introducing a read-only mode, because we essentially hide it. So yes, so that — so...
B
It's like a shuffle. But yeah, during upgrade — let me see if I have the steps, the exact steps, for an upgrade. So we shut down etcd on 4001 on each node and bring it up on 8001 instead; we do that on each member of the cluster, so effectively no clients can reach it anymore. The etcd-manager knows where it is — it knows the secret 8001 port — and so it is able to, you know, do backups at that point.
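Put as a sequence, and reusing the port numbers from the demo (with hypothetical helper names, since this is only an outline of the dance described, not the real etcd-manager code):

```go
package main

import "fmt"

// Illustrative only: the quarantine sequence described above. The real
// etcd-manager coordinates this via its elected controller.
func quarantineAndUpgrade(members []string) {
	for _, m := range members {
		// 1. Restart etcd bound to the quarantine port (8001) instead of the
		//    normal client port (4001), so ordinary clients can't reach it.
		fmt.Printf("%s: move etcd 4001 -> 8001 (quarantined)\n", m)
	}
	// 2. With the cluster quarantined, the manager (which knows the 8001
	//    port) can take a full backup and perform the upgrade or restore.
	fmt.Println("take full backup, perform upgrade/restore")
	for _, m := range members {
		// 3. When ready, move each member back to the normal client port.
		fmt.Printf("%s: move etcd 8001 -> 4001 (serving clients again)\n", m)
	}
}

func main() {
	quarantineAndUpgrade([]string{"127.0.0.1", "127.0.0.2", "127.0.0.3"})
}
```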
B
And then, when we're ready, we stop it on 8001 and bring it back up, but on 4001. Separately, we also have, for restores, another etcd — like an ephemeral etcd cluster, an etcd node — which we do bring up to read a backup, because that seems to be the best way to read a backup in any version's format.
B
So we will bring up a temporary etcd node, restore the backup into it, read the data from the backup — which might be a different version — and copy it into our real cluster, which might of course be a different version. And so that's how we do it. It's almost harder to talk about than it is to do.
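The key-copy step he describes — read everything out of the ephemeral node the backup was restored into, and write it into the real cluster — can be sketched with the ordinary etcd v3 client. Treat this as an outline of the idea under that assumption, not the actual etcd-manager code.

```go
package restore

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// copyAllKeys reads every key out of the ephemeral etcd node that the backup
// was restored into, and writes it into the real cluster. Illustrative only:
// the real tool also has to cope with etcd2-format backups, pagination for
// large keyspaces, and so on.
func copyAllKeys(ctx context.Context, src, dst *clientv3.Client) error {
	// An empty key with WithPrefix() fetches the entire keyspace.
	resp, err := src.Get(ctx, "", clientv3.WithPrefix())
	if err != nil {
		return err
	}
	for _, kv := range resp.Kvs {
		if _, err := dst.Put(ctx, string(kv.Key), string(kv.Value)); err != nil {
			return err
		}
	}
	return nil
}
```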
F
B
Doing certs is actually great — I hadn't thought about that; that is a good idea. The other way is, you know, the domain sockets, but I think I actually prefer certificates, because they're supported on both version 2 and version 3, so that's nice. On the other hand, we're only going to version 3 — I don't imagine people are going back to version 2 — so yeah. But yes, I agree that there is a...
D
B
So that's the thing which I hope everyone — so one of the first, like, action items for me is to split the backup logic out of this, so that we can run it as a sidecar pod alongside any etcd container. It currently will back up to anything that is a VFS target, VFS being the layering — the virtual file system that's in kops — and the implementations are S3,
B
GCS, the file system, OpenStack Swift — there's a bunch of them, and it's relatively easy to add more. So yes, definitely — the idea is that the backup store location should not be — of the many reasons not to use etcd-manager, I would hope that the backup store location is not going to be the one that anyone fixates on.
A
B
So it is a community project, of course, but yes, I think we — you know, we want to get to etcd 3. We wrote this because it is a problem that we have, and it seems like the best way to solve it. My ideal plan is that in our next release we get backups working, and then at some stage we make it optional to run etcd-manager.
B
A
Our chat — yeah, okay. And then there was a question, or a comment maybe, about pod versus systemd, and Tim responded. So I think that's probably worth saying verbally — that pod versus systemd is sort of an implementation detail, and it's not hard to switch between them depending on your dependencies, end quote — so that we capture that in the recording.
E
It depends upon — if you take on the semantics of owning it. If you say there's a separation of concerns and the etcd-manager only deals with etcd, and it does its container abstraction that way, it separates it out. But if you deal with pods, then you have to incorporate all the dependencies of running the kubelet and managing a static manifest of some kind. So I like the idea of the separation of concerns; I really think having it do...
A
B
I welcome input. I don't know — if someone thinks that it's better to spawn off a systemd process than just to run the process as a child, then let me know. It's certainly nice to run it as a child, because you have a lot more control and visibility over it, but honestly, I don't — if someone feels strongly, then file an issue and we can talk about it, I guess.
F
B
A
A question: so we have a script right now in the etcd pod — at least the one that we run on GKE — that will do a migration, if you set some environment variables, between etcd 2 and etcd 3 as part of the upgrade. I know it was sort of specific to that etcd 2 to etcd 3 migration, because it was sort of a stop-the-world migration, and it did not work for multi-instance etcd necessarily, or at least was never tested.
B
I mean, so this, in my mind, is: migrate if needed, but with coordination across the members of the cluster, and doing that without having something centralized — without, like, creating another turtle, right? So, like, using the gossip approach to discover nodes, doing what I call, like, a loose leader election or a weak leader election, and then pivoting to etcd for the strong leader election. So this is — I mean, it's the same approach; I mean, it should be the same.
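One simple way such a "weak" leader election over gossip can work, before a real etcd quorum exists, is sketched below; this is an illustrative choice (lowest ID wins), not necessarily the algorithm etcd-manager actually uses.

```go
package gossip

// weakLeader reports whether this manager should act as the controller:
// every manager gossips its ID, and whichever peer currently has the lowest
// ID leads. Ties and brief disagreements are tolerable because the result is
// only used to seed and reconcile the cluster; strong coordination moves to
// etcd itself once it is running.
func weakLeader(selfID string, peerIDs []string) bool {
	for _, id := range peerIDs {
		if id < selfID {
			return false // a peer with a lower ID should lead instead
		}
	}
	return true
}
```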
B
I think that they do an in-place upgrade. One of the things I wanted to figure out is when we should and should not do an in-place upgrade versus the dump-the-keys-and-put-them-back-in approach, and what exactly the consequences are in terms of what happens if you mess around with resource versions behind Kubernetes' back — is it sufficient to restart the API server, like, what happens exactly? This would be...
A
Cool. Then Nick put a question in chat — or I guess a comment — that he agrees with Tim about reducing complexity in the bootstrapping process, and there's a question about how this sets up HA across instances and how we discover peers. I think we covered that a little bit, but it might be worth reiterating, since I...
B
B
I would hope so — but yeah, we can add and remove nodes, and if we should be running an HA cluster and we're not, it will pick three peers and go and make RPC calls to each one of them to ask them to start an etcd cluster. It effectively is a static configuration that is controlled by the leader controller. So that's — it's, it's...
B
D
B
B
Yes — someone brought up an interesting comment, which is that we are, in effect, recreating the Kubernetes concepts, and I think that is true. I'd say we are recreating the minimum that is required to not have a circular dependency on etcd — or to not have a perceived circular dependency on etcd — and...
D
B
A
It was — it was: so, spawn etcd inside of that same static pod, as opposed to creating a separate pod? Because I think there are some advantages to having etcd in its own pod — sort of as a process within a pod — where you get things like resource isolation, to some degree, and process management by the kubelet, that way.
B
F
E
F
Right, I mean, that's really — I was all sort of leading up to that: that it kind of can become really complicated, and I think at the moment it's a fairly simple thing, and that's something to be appreciated about it, right? Like, perhaps we don't need all of the complicated parts — just, you know, a lower-level piece that is required before you can run the complicated pieces. That's...
E
That's the part where I was going back to the systemd aspects, because systemd has all the tooling to do low-level system management — processes and quarantine and isolation and monitoring. I mean, the kubelet arguably reinvented pieces of systemd — or flip that argument around, right? So if you do that, you get the pieces without having to inherit the world.
F
F
F
C
B
Where, you know, we have a pod container running the etcd-manager, or a systemd unit running the etcd-manager directly — I think those are fine alternatives, and different people will probably want different mechanisms based on their preferences. I think the more concerning thing, in my opinion, is the idea that we are...
B
B
Where would you like this to go? Should we — I mean, obviously there's the GitHub repo for further discussion, if it's GitHub-repo-type things, and we can talk about it in the SIG Cluster Lifecycle Slack; those are the two channels. And obviously everyone is welcome to send me an email or Slack me privately if they want to do that. And yeah — I think, yeah, Kris, yeah, I agree: SIG Cluster Lifecycle Slack. Does that work for... yeah.
A
So we can try to pull people into a SIG meeting in the future, or we can do something out-of-band with people that we think are interested — maybe send something out to, you know, kubernetes-dev, or follow up on the existing email thread, Justin, to see who would be interested in having that conversation.
A
Okay. I think — so, Justin, why don't you follow up with the SIG API Machinery mailing list or Slack channel; we can continue this conversation in the SIG Cluster Lifecycle Slack channel, and we'll take the last 20 minutes of our meeting to go through the rest of our agenda. Thanks for joining — you can probably drop at this point if you're not interested in the rest of our agenda.
A
So I wasn't here last week; I was reading through the meeting notes and all that, and there were a couple of things that people said they wanted to follow up with me on. So I went through them — Tim, if you want to scroll down — there were a couple of things where I said "follow up with Robbie". If we want to go over those briefly, that would be good.
E
It would be nice if we could have a canonical list from the Google folks of what the blocking issues are — I know you guys had worked on a separate document — what are the blocking issues that are preventing the GKE folks from wholesale flipping to a kubeadm type of deployment? Because I think, as a community...
A
So there is a Google engineer who is driving that forward, but it's taken a little while. I think there were a couple of other things on the blocker list too, but I think that was one of the longer poles, and it wasn't worth trying to burn down a lot of the other stuff yet until we had sort of the end of that tunnel in sight — because when I mentioned that to folks like Dawn and Bowei and so forth, you know, from SIG Node and SIG Network...
E
If we have at least a tracking issue that we could periodically look at, I think that would be useful and fruitful, because as I triage the backlog, you know, there are a bunch of issues, and understanding the relative priority and how the DAG might flow for execution would be super helpful — because that way, if folks want to, you know — if there's "help wanted" on issues — they can see, like, oh, before we can actually implement XYZ, we need to do this.
A
Sure. So I don't think Chris is on the call, but Chris — a different Chris, from Google, but spelled the same way as Chris Nova's — was working on that a little while back, and he has a doc that's internal, and I'll ask him if he can make an external version of that doc. I think most of the things that came out of his investigation got added to Justin's doc.
A
E
C
E
E
A
There was an issue open — and this was something that Chris was gonna do, but I sent a pull request for it yesterday — to update all of our meeting times; I think we're just waiting for the top-level owners of the community repo to get that merged. And the last one was logging, slash, finding a canonical issue for tracking the repository split. I assume, Ilya, that that was moving the kubeadm code out of the main repo into the kubeadm repo — or was that something different?
F
A
A couple of things mentioned in that doc were using sort of build visibility rules to ensure that we weren't becoming more entangled with the main repo, in either direction, than we wanted to be, so that the code could be sort of moved out wholesale. I'm not sure if we've done any of those prerequisite steps, or if we were starting to get more entangled in the main code, because it's pretty easy to say, oh, there's a constant I like in the kubeadm code —
A
let me just import that, right? So it might be worth revisiting those things and taking some of the prerequisite steps. And every time I check on the status of moving kubectl out, it's like it has slipped by one or two point releases; it seems like we're getting farther away rather than closer. So I think one option here is to go and revisit — Lucas points out that the doc is kind of stale — yeah, so the doc could use a refresh, and it needs a new owner if we're gonna refresh it.
A
So if someone is interested in exploring this issue, it would be great if someone could go sort of take that doc and write a current version of it, or maybe, again, turn it into a GitHub issue. One thing I was gonna say is: right now we are releasing kubeadm in lockstep with Kubernetes, and one thing we might want to consider is switching to a different release
A
cadence. So, something like kops doesn't, you know — its code, the main repo, doesn't try to release day-and-date with the Kubernetes release; Kubernetes releases, and then kops will release a new version of kops that adds support for the new Kubernetes release. And it might make sense to move kubeadm into more of that sort of model,
A
because right now, if we find a bug in kubeadm, we're forced to sort of lobby for a new Kubernetes release to be cut to fix a bug in kubeadm, which doesn't necessarily make a lot of sense. And if that's the world we want to be in, then I don't think we have anything blocking us from moving our code out, because the main thing we were getting from being part of the main repo — and trying to continue getting, per the doc that Jacob wrote — was being tied into the release process.
A
E
I think this is inextricably tied with whether GKE is going to wholesale default to using this, because the build artifacts and release process and testing are kind of in this — it's again sort of a circular thing. If we move out of the repo, we might have issues with setting up all this configuration for how we build, test, and deploy all the artifacts together. So I'm hesitant to want to do that until folks are bought into ownership.
E
A
Yeah, I mean, that's a great point: if we do sort of adopt our own release cadence, then we need to make sure we can staff that release cadence and actually keep up with it, right? Whereas, you know, piggybacking on the main release process means that there is a release team that is sort of doing that for us.
A
G
E
One of the things I wanted to do is to create a GA milestone for the kubeadm repository and go through the entire list and make sure that they all have the GA tracking on the milestone, so that way we could have visibility into that. I have not done that bit yet; what I've done in your absence was basically go through the entire PR backlog and issue backlog, and I still need to finish off the kubeadm issues.
G
A
G
E
G
I don't necessarily think we need component configuration for the others; I'm mostly concerned about the kubelet, because that is what's external. Like, with kubeadm, mostly we can manage the static pods as we upgrade things, but managing the kubelet is really hard. Yep, so that is — yeah, then I've got to answer that question. Oh.
A
Okay, we only have five minutes left, so I'm going to breeze through the next agenda topic, which is: Justin, you have the mission statement doc that you had volunteered to turn into a markdown file — please go ahead and do that. It also doesn't appear to be shared with kubernetes-dev or kubernetes-announce or kubernetes-users, so if you could share the doc with a wider group of people, for anyone who wants to join our SIG mailing lists.
A
E
Because some of those command-line flags we're creating — we've created — we want to kill them, right? So I'm torn with the struggle of whether or not — what we want to do in the long term — because I think the UX experience of getting this to GA means that we have to support these things in the long run, and we know that we want to get rid of a lot of those — some of those — options.
E
E
That's the ideal case that I want to get to, and I think we might want to have, as a precursor, an audit before we go to GA, to make sure that these flags are what we want to support for the long run. I think that's probably my main kicker there, because, you know, there are PRs right now that we need to roll back with regards to, like, pass-through parameters.
A
A
Hopefully Kris can fix her mic before next week, so she can speak up instead of just chatting. Also, we were talking in chat: code freeze is February 26th. Jaice sent an email about 45 minutes ago reiterating that that is coming up very soon. So, in case people are confused and think it might be in March: it is not in March, it is much sooner than March.
A
It is in about three weeks — three weeks from yesterday — so we are rapidly approaching code freeze, and there are quite a few issues currently open for the 1.10 milestone: 32 for kubeadm. So we've got a lot of work to do in the next couple of weeks if we want to close those, instead of punting them to the next release.