From YouTube: 2017-05-02 17.01.19 SIG-cluster-lifecycle 166836624
A: Right, so this is the SIG cluster lifecycle meeting on the 2nd of May, 2017. Happy May, everyone, and sorry that I've been conspicuously absent the last two weeks; I just had a very busy time around DockerCon and then recovering from it, but I'm back. So, is there anything that anyone needs to fill me or the group in on before we start going through the agenda today?
D: Hopefully folks can see it. We sent around last week a sort of rough draft for high availability in kubeadm, and there isn't really much set in stone yet.
D: There are some minor things in there that I think still require further vetting and sort of doc work. So why don't I just give a brief overview of some of the pieces here. I think what's important to note is the non-goals that are stated in the document: I don't try to tackle certificate or token management, because that's a whole separate space, and bootstrap discovery mechanisms I think are in the same category, because those things can be integrated, but I think not first.
D: And I try to divide up the pieces here, so it doesn't try to be an all-encompassing proposal at this point, because there's so much; it just tries to divide the pieces into actionable items. Then there's text in there where I talk about some of the existing problems. One of the existing problems that we know of is that currently we lock on endpoints, and this has known problems: there's a fan-out problem from the API server, because endpoints are broadcast to every single element in the cluster.
D: Every sync, even if it's just an annotation, requires a sort of broadcast to the whole cluster. So one of the modifications is to lock on ConfigMaps instead, and there's an action item to define how we're going to handle self-hosting in a world where we want to be able to load the component config, versus passing a whole bunch of command-line parameters, so that a component would be passed a ConfigMap. That's listed down here in this section. Are there any questions about that? I'll stop for a second.
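The ConfigMap-locking idea described here is essentially lease-based leader election: each controller tries to write its identity and a renewal deadline into a shared object, and only the holder of an unexpired lease acts. A minimal sketch in plain Python (invented names, not the real client-go `leaderelection` API):

```python
import time

class LeaseLock:
    """Toy leader-election lock, mimicking lease annotations on a shared
    object such as a ConfigMap (illustration only)."""
    def __init__(self, lease_seconds=15):
        self.lease_seconds = lease_seconds
        self.holder = None          # identity of the current leader
        self.renew_deadline = 0.0   # wall-clock time when the lease expires

    def try_acquire(self, identity, now=None):
        """Acquire or renew the lease; returns True if `identity` is leader."""
        now = time.time() if now is None else now
        expired = now >= self.renew_deadline
        if self.holder is None or expired or self.holder == identity:
            self.holder = identity
            self.renew_deadline = now + self.lease_seconds
            return True
        return False  # someone else holds an unexpired lease

lock = LeaseLock(lease_seconds=15)
assert lock.try_acquire("scheduler-a", now=100.0)      # first caller wins
assert not lock.try_acquire("scheduler-b", now=105.0)  # lease still held
assert lock.try_acquire("scheduler-b", now=120.0)      # lease expired, takeover
```

Because the lock state lives on one shared object rather than on endpoints, renewing it touches only that object instead of fanning out to every watcher in the cluster.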
D: It is a separable concern that we can address, right. So the question was how we are going to do HA. It is a fundamental component of how you do active-passive locking, so it addresses that concern; but yes, it is a separable piece, and it's not a requirement for us to do active-passive with the scheduler and controller manager.
D: The default configuration for kubeadm is something that we want to debate. We absolutely don't want it to limit us; we still want to be able to provide the existing mechanisms, like passing in an existing etcd endpoint to the API, or the potential to pass in other backends that we don't currently support; but this would be, you know, the default for a simple self-hosted cluster.
G: Quick question here, Tim — and, you know, even though we work at the same company, I just read this this morning, so I hadn't discussed this with you offline. One concern that I have here is that there's this sort of sequencing order: first you must call kubeadm init, then you must start the other masters, and then you must call kubeadm migrate to make things work, and this makes it much more difficult to orchestrate with something like Terraform or CloudFormation.
D: Sasha had also made similar comments, and I think that's totally possible to do if you seed it with something like a master setup — something seeded that says "I'm going to have three", right. If you seed it, similar to what he's done in this PR here, which is 4473, I think that would automatically configure it, right, yeah.
E: Brendan and I have talked about a lot of the issues, and I think we were trying to discuss them in detail, like all the edge cases there. I would say that it is very much not uncontroversial to do this, right; I feel like this is the one thing that is definitely going to require a lot of explaining to users that, yes, it is safe.
D: I think the other piece here, too, is that I don't want to not support passing in existing mechanisms like that. That's what this current PR was all about: just using the existing etcd bootstrap API to get rolling. So I think, you know, eventually there's this slow migration path from beta to GA that we'd have to debate; in the meantime, we support both paths.
D: It really all depends on how you think about failure inside the cloud, right. You can back it by EBS if you're super paranoid about everything; you can sacrifice storage and rely on essentially etcd replication as your RAID; or you can just — which is what GKE does — run a single etcd on replicated block storage, and then you don't care about HA. Essentially, we provide users with all the options, because some people are less tolerant of failure than others.
E: We could say that, and then it will happen, right; every time we say something will never happen, it happens, and you're into a sort of business continuity or disaster recovery scenario — but I love it, yeah. But then, once you say disaster recovery, you're talking about going off to define your etcd backup strategy and your window of data loss, right. There's all this stuff that I had to deal with at my previous database company, and you get into the weeds here, so yeah.
G: Okay, I think what we can say is that we're relying on the underlying infrastructure to provide some persistence of disk; if all disks are lost at once, then you're in trouble. And then maybe let's get the basics down here, and then provide a way for cloud-specific extensions to be able to do things like swap disks in as things reboot, yeah.
D: I think we have sane defaults, so we can have the seed carry configurability, right. So we pass an initial seed as we start up — sort of a profile, if you want to call it that, of what we expect things to do — and that way we get rid of the explicit step and have a more declarative means by which we can stand up everything. Then for the user it's just choose-your-own-adventure, right; the user has what we enumerate.
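The "initial seed" or profile idea — declare the control-plane shape up front so no explicit migrate step is needed — could be sketched like this (a toy model; `SeedProfile` and its fields are invented for illustration, not a kubeadm API):

```python
class SeedProfile:
    """Toy 'initial seed' declaring the expected control-plane shape up
    front, so each master can decide its own action declaratively."""
    def __init__(self, expected_masters):
        self.expected_masters = expected_masters
        self.joined = []

    def register(self, name):
        """First master bootstraps; the rest join; no separate migrate step."""
        action = "bootstrap" if not self.joined else "join"
        self.joined.append(name)
        return action

    def complete(self):
        return len(self.joined) >= self.expected_masters

profile = SeedProfile(expected_masters=3)
actions = [profile.register(n) for n in ("m0", "m1", "m2")]
assert actions == ["bootstrap", "join", "join"]
assert profile.complete()
```

The point of the sketch is that the decision "bootstrap vs. join" falls out of the declared profile rather than out of an imperative sequence the operator has to run in order.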
D: Are there other questions, or should we move on to the next little bits? Keep moving, all right. So, API servers: we currently have a hard dependency on load balancers for load-balancing the API server. We don't necessarily want... we want to be able to have an alternative there, but we've also seen a number of issues in the wild; basically, people aren't configuring their load balancers properly.
D: So, as an action item, we want to make sure that we have good documentation going forward, and leverage that as the main checklist item: update all documentation with a supported list of best-practice load balancer configurations, because there are a number of issues that have occurred in the field or have resulted in changes directly to Kubernetes itself — the simplest of which is that a lot of people are leveraging the health checks in load balancers.
D: Then there's a bunch of other things, like setting your timeouts properly, because if you have a very low or default timeout, what you often find is that, if you're connected to a client, it might disconnect you, or you might have bandwidth problems between your nodes because they're forcing relists, because your timeouts keep on triggering. So there's a bunch of minor things there that we've learned over time about how we want to properly configure the load balancers. The second part is the idle connection timeouts.
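The relist problem described here comes from clients reconnecting in a tight loop when a load balancer drops idle watch connections. A common mitigation is exponential backoff on reconnect; here is a small sketch (plain Python, names invented, no real Kubernetes client involved — the backoff delays are recorded rather than slept so the behavior is visible):

```python
def watch_with_backoff(connect, max_attempts=5, base_delay=0.5):
    """Retry a watch connection with exponential backoff instead of a hot
    reconnect loop, which is what turns load-balancer idle timeouts into
    expensive relist storms. Returns (result, list of backoff delays)."""
    delays = []
    for attempt in range(max_attempts):
        try:
            return connect(), delays
        except ConnectionError:
            delays.append(base_delay * (2 ** attempt))  # 0.5, 1.0, 2.0, ...
    raise ConnectionError("watch could not be re-established")

# Simulate a load balancer that drops the first two idle connections.
state = {"failures": 2}
def flaky_connect():
    if state["failures"] > 0:
        state["failures"] -= 1
        raise ConnectionError("idle timeout")
    return "watch-established"

result, delays = watch_with_backoff(flaky_connect)
assert result == "watch-established"
assert delays == [0.5, 1.0]
```

In a real client the delays would be slept (with jitter), and the reconnect would resume the watch from the last seen resource version to avoid a full relist.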
E: Putting the detection in there is great. Those two issues, though — are those things we have to fix upstream? Like, for (a), I think the problem is that RBAC, or the authentication, is problematic with a health check; and for (b), if we were to send a ping or a keepalive, would that solve the problem? Why not?
E: Because, for the health check, you have to enable anonymous authentication, which means that you're then one mistake away from your cluster being accessible to the world. A lot of real-world customers are unhappy with that, and so they don't do it. So you can't have a health check unless you turn on the anonymous support, or something — I don't remember exactly.
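The tension here is that the load balancer's probe is unauthenticated, but enabling anonymous auth globally opens the whole API surface. One resolution is to exempt only the health path from authentication. A toy sketch of that gate (illustration only; the real API server's auth chain is far more involved):

```python
def handle(path, authenticated):
    """Toy request gate: expose only /healthz unauthenticated, instead of
    flipping on anonymous auth for the whole API surface."""
    UNAUTHENTICATED_PATHS = {"/healthz"}
    if path in UNAUTHENTICATED_PATHS:
        return 200                      # probes always succeed
    return 200 if authenticated else 401  # everything else needs credentials

assert handle("/healthz", authenticated=False) == 200     # LB probe works
assert handle("/api/v1/pods", authenticated=False) == 401 # cluster stays closed
assert handle("/api/v1/pods", authenticated=True) == 200
```

The design point is that the exemption list is an explicit, tiny allowlist rather than a global anonymous-auth switch.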
D: Well, that's... I see the issue. We do have to address it, for sure. I think the ability to health check an API server is fundamental, right; it's part of Kubernetes proper to health check your workloads to see if they're alive, to properly load balance. I think we need to be able to fix that problem, yeah.
G: It depends on how health checks and keepalives work with that particular load balancer. Some load balancers will keep it alive; others will have a total limit on the length of any connection, right — there are different ways to limit that — and so, you know, losing a connection is painful here because of resyncs.
D: If requests don't keep going to the same API servers, you're going to get a lot of cache misses, so you're going to have this weird thrashing behavior; performance expectations will not be exactly what you expected. This is really for the high-scale folks; I don't think a lot of other folks will care as much. So we talked about the connection timeouts and expensive relists, and the last bit is just the way that a lot of folks deploy for sensitivity.
G: Question on the load balancer stuff, Tim — sorry to rewind a second here. What's the experience going to be like for users? Are they going to say "I want a load balancer" and then just specify, you know, "here's the address to use for that thing"? Are they going to configure the load balancer themselves?
G: So, just, you know: there'll be an extra parameter saying "I have a load balancer here", and then you configure everybody to connect through this load balancer, and it's left up to the user to figure out how to configure that load balancer? — D: Yeah, I think we leave that management outside, for other frameworks and stuff like that. — G: Yes, okay, yeah. And with CloudFormation it kind of gets to be a pain in the butt, because you can't know the name of the load balancer that's pointing to you in CloudFormation, right.
D: Brandon and I have gone back and forth on this about how exactly we seed the proxy with a service, such that we could have an initial bootstrapped load-balanced endpoint, which is specified as part of the manifest that gets laid down, right. And this is by no means the answer; this is like a hack way of doing it.
D: So, you know, all options are welcome at this point, but the proposed idea, I think, is to have some type of bootstrap parameter that you specify to the proxy that allows it to set up an initial service — and this service would not collide with the existing services; that would be one of the requirements. That way the kubelet can still talk, via the iptables rules, to the existing API servers. How that's done is still open; it's more like an RFP to the networking folks that manage kube-proxy.
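The bootstrap-service idea boils down to: pre-seed a fixed, non-colliding virtual IP that the proxy programs (e.g. via iptables DNAT) to the real API server backends, so components can reach the control plane before service discovery is up. A toy model of that routing (invented class, the IPs are placeholders):

```python
class BootstrapService:
    """Toy model of a pre-seeded, non-colliding virtual IP that the proxy
    would program (e.g. via iptables) to reach the real API servers."""
    def __init__(self, vip, backends):
        self.vip = vip
        self.backends = list(backends)
        self._next = 0

    def route(self, dest):
        """Round-robin DNAT-style choice for traffic sent to the VIP."""
        if dest != self.vip:
            return dest  # not our service; pass through untouched
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend

svc = BootstrapService("10.96.0.2", ["10.0.0.1:6443", "10.0.0.2:6443"])
assert svc.route("10.96.0.2") == "10.0.0.1:6443"
assert svc.route("10.96.0.2") == "10.0.0.2:6443"
assert svc.route("192.168.1.5") == "192.168.1.5"  # other traffic untouched
```

The non-collision requirement from the discussion corresponds to picking the VIP outside the range that the normal service allocator hands out.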
I: What we've done in Tectonic is essentially have a snapshotter that snapshots the service proxy state — essentially the iptables rules and such — and then restores them before things come back up; this is the result of a circular dependency on etcd, and a similar resolution applies to the API server, so that we could make it a normal service.
I: I would argue this is the best decision. In etcd we had smart clients early on, and then everyone screwed it up, period; so now etcd will transparently forward all requests to the current leader. We give you the option of being smart in order to reduce load, but if you mess up, we are going to penalize you.
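The etcd behavior being described — any member accepts a request and transparently forwards it to the current leader, so dumb clients still work at the cost of an extra hop — can be modeled in a few lines (toy classes, not the etcd wire protocol):

```python
class Node:
    """Toy etcd-style member: a non-leader transparently forwards requests
    to the current leader, so clients need not track leadership."""
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster  # shared view of cluster state

    def handle(self, request):
        leader = self.cluster["leader"]
        if self.name != leader:
            return ("forwarded-to-" + leader, request)  # extra hop, but works
        return ("handled-by-" + leader, request)

cluster = {"leader": "n1"}
n1, n2 = Node("n1", cluster), Node("n2", cluster)
assert n1.handle("put k v")[0] == "handled-by-n1"
assert n2.handle("put k v")[0] == "forwarded-to-n1"
cluster["leader"] = "n2"   # leadership moves; clients need not notice
assert n2.handle("put k v")[0] == "handled-by-n2"
```

A "smart" client would learn the leader and send there directly, saving the hop; the point of the design is that a naive client is merely slower, never wrong.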
G: And I think that's an acceptable way for us to work for Kubernetes too. Also, if we're running active-active on the API server, then hitting any API server will work. But the client for etcd still actually keeps track of a set of servers that it can talk to; it doesn't require you to actually put a load balancer or some sort of iptables magic in front of your etcd cluster, right. That's not required.
G: etcd allows you to do it; we can do the same thing here, right. I feel like it's not the official thing, but the correct way to use Kubernetes, in my mind, is actually not to use the proxy at all, not to use cluster IPs, but to have enlightened workloads that know how to talk to the endpoints API to actually get the list of things that, you know, are the service, right.
G: If the clients are smart enough, they should be able to do this type of thing, either via a proxy or via a library that's built into the client. You know, the endpoints API is a piece of crap, but we could actually go through and have the cluster itself exposed as an endpoints-API type of thing, and have everybody know how to talk to that, and then cache the values so that it can resync itself as it goes.
D: I think the general theme is that we can put the logic in two places: one, you build it into the client, and two, you build it into some proxying magic. How exactly that proxying magic occurs is, I think, the debatable piece that we all kind of look at and, you know, try to squint at to make sense out of. Yeah.
A: I just wanted to mention that I think this is exactly the discussion we had in Berlin — I just had a link to it — where we were talking about a smart client versus, I think, the option that you're calling the proxy option; we called it infrastructure-agnostic DNS or something, but I think it's the same idea: the cluster itself provides some way for dumb clients to find the API servers as they move around.
E: That's the infrastructure-agnostic idea, yes. So I think this is great, and I like this, and it's at least as elegant as DNS; but I think we can also reduce it: we'd have a local DNS server, just like we have a kube-dns server, right. There's no need for it to be a full Route 53 server or a full BIND server; just like rewriting iptables, we can do some DNS magic, okay.
G: What if we plumb through to the kubelet and to the components — the standard clients — a way to specify an alternate DNS server, instead of actually going through the local resolver? You make that a parameter, and then you can run a little mini thing that does whatever magic it needs to do to be able to make that work. So...
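The "pluggable resolution" idea that comes up here and again later — clients with multiple name-resolution strategies tried in order, with an explicitly named DNS server or even a hosts-file entry as one of them — can be sketched as a resolver chain (toy classes; the names and addresses are invented placeholders):

```python
class StaticResolver:
    """Fixed lookup table, like a hosts-file entry or a seeded config."""
    def __init__(self, table):
        self.table = table
    def resolve(self, name):
        return self.table.get(name)  # None means 'no answer here'

class ChainResolver:
    """Try each strategy in order; first answer wins, mirroring clients
    with multiple pluggable name-resolution strategies."""
    def __init__(self, resolvers):
        self.resolvers = resolvers
    def resolve(self, name):
        for r in self.resolvers:
            addr = r.resolve(name)
            if addr is not None:
                return addr
        return None

local = StaticResolver({"kubernetes.default": "10.96.0.1"})
fallback = StaticResolver({"apiserver.example.internal": "10.0.0.5"})
chain = ChainResolver([local, fallback])
assert chain.resolve("kubernetes.default") == "10.96.0.1"
assert chain.resolve("apiserver.example.internal") == "10.0.0.5"
assert chain.resolve("unknown.name") is None
```

In the real proposal one link in the chain would be an actual DNS query against the explicitly configured server, bypassing the host's resolver, so no split-DNS setup on the machine is needed.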
A: I think the interesting thing that I want to try and get to is whether, as a group, we feel like smart client or smart server is the way to go — maybe you need to define what you mean by that, or you can have an opinion on it — but what's the feeling in the room about that?
D: What do you mean by smart server?
A: Exactly. What I mean is something like an infrastructure-agnostic DNS service, or this idea which I'm seeing here, which is to use the kube-proxy and set up iptables records. These are all examples of the same thing, which is: make the infrastructure, or make Kubernetes itself, give you some way of finding the API servers as they move around inside the cluster, okay.
A: Sorry, I wasn't being fair. The distinction I was going to draw is between the clients that talk to the API server being smart, versus something behind the API server giving you some way of finding them. I was just wondering whether we have, as a group, an opinion on which of those; if you draw that line, which side do we have a preference for? I know which I'm for.
E: I don't think, as far as I understand — and it's a shortcoming of kops as well — that any of these proposals address the actual discovery issue; they just move it down a layer, right. So, like, DNS doesn't solve it, iptables doesn't solve it, extending kubectl doesn't solve it. It just says: well, once we have a seed that is valid, we can find the other ones.
G: I don't know how much we want to continue down this discussion here; it might be worth having some dedicated time to really dig into this and approach it with: let's brainstorm all the different things, trying to, you know, create dichotomies there, and then do a little bit of weighing, exactly.
D: So this here basically weighs two options, neither of which I particularly like; nothing feels elegant, right. It all feels like I'm putting Bondo on something. So I think we enumerate the state space here; I'm identifying this as the key thing that a lot of people have desired over time. The question is that it doesn't prevent you or hamstring you from doing this other load-balanced approach, right, which is the current case that exists today.
E: I'd say that where it gets interesting is what we do in kops: we query the GCE or AWS APIs. You could imagine it's like the classic discovery problem; you could imagine a broadcast if you're on bare metal. I think the point why that's relevant is that we probably don't want to build that into kubectl, so we do want to build it into a separable, replaceable layer — and that's why this iptables thing is nice; that's why DNS is nice.
G: It doesn't seem crazy to me to actually, you know, change the clients, including kubectl, so that they have multiple name-resolution strategies, where DNS ends up being one of those and the default one, but then we can actually plug in other things. And one of those other things that plugs in might be DNS with an explicitly named DNS server, so you're not stuck configuring split DNS on machines and stuff like that, or even something as stupid as "here's something in a hosts file".
D: So I think, for the purposes of keeping things rolling, we can add that item, because most of the other bits we can break out as separable actionable items where we can start to execute — like the ConfigMaps one is non-contentious.
D: You know, it can be broken out completely separately, and I don't know if Mike is on the call, but he has proposals that he has been working on to move the components over to a component config that gets loaded via ConfigMap, and he has plans that he's going to work on for this release. I think that's a separable action item, you know.
D: Same with this other item: of the contentious ones, a few still need to be hammered out. I realize it feels like discussing paint shades — it's kind of subjective — but I think we can rally to a certain point, and we should do that, and I think we can break it out as a separate issue. That said, are there others?
G: One thing that I think would help me a little bit here is to really double down on what the flow is of what people have to type in different scenarios — or at least what goes into the terminal, or gets run — and really nail that down. So I want to expand on this a little bit, and I'd love to talk about what it will take to get rid of that migrate step, because, yes...
D: I think that was part of this. I think it was made as a doc comment, but as part of this discussion, up here, right, he had the same comment: we wanted to remove migrate, and I think that's actually this one. And then I mentioned that if you seed it with the original master configuration, then I think it gets rid of migrate entirely. So this bottom portion with the example flow should be reworked from that comment, and then...
A: This looks really good, so thank you very much, Tim, for driving this process forwards. And yeah, I wouldn't even say that what you've got listed under "contentious" is contentious debate; it's more that we just don't know what the best answer is yet. So yeah, maybe that's a subtle distinction, actually.
E: I'll say a few words on kube-up directly. I think there are obviously problems with kube-up, right, but I think one of the sort of systemic problems was that when we were thinking about how we install Kubernetes, we thought about kube-up, and we didn't put as much effort into the documentation, or as much effort into "is this installable", because we were like: oh, we just put it into kube-up and there's no problem, because, yes, it's a horrific script, but it's done. And I love what we're doing in this SIG in terms of addressing that, but at the same time we are repeating that mistake to a degree.
G: We've seen a similar situation with the cloud providers, where they're essentially under-documented and the only documentation is looking at the various startup scripts that configure them. So I hear what you're saying there, Justin; I think the point that Tim had around documenting best practices for setting up load balancers in front of HA masters definitely moves towards that.
E: I think yes, and I like the idea of kubeadm being an existence proof. We should absolutely produce something that works; we shouldn't just be like, "oh, here's the document, go and install that", because then we have the same problem where it's so easy to write in the document that HA is running, but it doesn't work, right. So it's great that we have something that is an existence proof, that is ideally under e2e testing, and that continuously proves that it works and still works.
E: Like, kops is sort of integrated with kube-up, and we don't have this problem directly; we were able to enter at the source level and reuse kubeadm. The issue is, what I see us recreating with that is a more sophisticated kube-up. So let's say kubeadm is the Salt in this case, and kube-up was the bash script that drives it; in the new world, kubeadm is the Salt and kops is the bash script.
I: So we kind of have this problem too. We started with templating, including an engine, and it got reset; what we ended up doing is essentially removing the templating engine and relying on other templating engines. So our Tectonic installer lays out all the things required, installs Terraform, generates the templates and stuff, and then runs it.
G: I do want to push back on the kube-up analogy a little bit. If kubeadm is the Salt in that analogy, kubeadm can be used from multiple drivers, whereas Salt was really intertwined with the bash in a super unhealthy way, which meant that every scenario anybody would want to do would cause them to plumb everything through everything. Whereas the pattern with kubeadm is that we can provide sort of an easy, discoverable path through kubeadm that'll fit a lot of cases.
G: We can provide some advanced configuration, and not every user, not every framework that's driving it, is going to have to configure everything; and if we have multiple drivers — whether it be, you know, kops or Ansible or Terraform or CloudFormation — that's driving it, that will actually keep those interfaces cleaner when there are multiple clients. So I totally understand the danger, and I think the danger here is papering over too much complexity with too much in kubeadm, so that it becomes a really opaque black box.
G: You know, we've done it before, but then it becomes a black box to users. So I think making sure that we document "HA the hard way", making sure that we document the phases of kubeadm and support those, and making sure that kubeadm can actually be used as a sort of education tool for those who are interested in understanding what's going on under the covers — I think those are all things that we need, simply to help prevent this stuff from going that way, yeah.
J: Exactly, and that's one thing I want to work on during the summer, when I get to it: basically make the code better than it is for others, like Justin, to use — for more outside consumers — and the documentation. Soon, when I have time, I'm going to update the phases proposal once again to reflect the current state; and you probably have some more comments now that you have integrated it, yeah.
E: To Justin's original point: our mission statement for the SIG has been labeled a work in progress for a really long time, and it looks like no one's made any edits for like seven or eight months, and I don't really see it come up. I'd love to see it become a crisp enough document that it's the meter stick, the rubric that we use.
E: So this is half broadcast of status and half solicitation for new owners. A few of these are in progress or done: we had end-to-end tests for our master branch but not for the release branch, and that got done right away. Our new pull job has been in review for a while — I think that's close to LGTM — and someone signed up for, I think, doing the testgrid alerting.
E: You can actually configure it so that when your tests start failing it proactively notifies the SIG, which would be great, because our job kept going red and no one noticed for a long time. It's red right now, because someone else broke it, and we're trying to fix it right now. But for the things that are unknowns — maybe I can use the tactic of long awkward silences to peer-pressure...
E: ...people into volunteering for these, but I'll just go through them really quickly. So: adding new end-to-end variants that use other CNI providers, possibly first-class citizens like bridge. Right now we just use Weave, so it's one signal, but we could have many more signals; if Weave breaks, it won't be easy to tell who's at fault, so the more providers we have, the better the signal. Let's see — having a process for triaging and fixing kubeadm end-to-end tests: right now...
E: ...it's been kind of a solo job. I'm definitely willing to help out with the how — I've had these notes piling up of how our end-to-end tests work and how to debug them for a long time, and it's just taken me a long time to create a consumable document out of that — but I don't know if I have enough bandwidth this quarter to really define a triage process. There have been many different scenarios under which they break: there are legitimate kubeadm regressions, there are regressions in our dependencies, in the test infrastructure that we build upon.
E: You know, there's breakage of the CNI provider; it's always different ways. So once we figure that out, maybe having a rotation, or some way that, once we're notified that something breaks, someone can do a quick investigation to find out what broke, and then having some sort of policy around who should actually fix it — ideally, the person who broke things.
E: Obviously, because we were alpha for so long, and Kubernetes has kind of had a shaky release process overall — I looked into kops' release process, and I really liked that it's much more clear about the expectations from one release to another and the steps to perform a release. For kubeadm we didn't have an explicit "make sure you set up the end-to-end tests, make sure we release new Debian and RPM packages" — all the steps that are kind of tribal knowledge, such that if someone goes on vacation, we're obviously going to miss something.
E: Now that we're beta, it'd be great to enumerate those and make sure we have an explicit checklist for every release. And the last one is more Kubernetes-wide, but one of the things that we got criticized for during the interesting events of the 1.6.0 release was poor communication. We had multiple GitHub issues open essentially tracking the same failure of 1.6.0 kubeadm, and it wasn't clear which one was authoritative.
E: There was a really high noise-to-signal ratio for any particular community watching them. So if you were a user interested in when things were going to get fixed, when you could expect a new release, it didn't really address that for you; and if you were trying to coordinate who's fixing what, in what ways, it didn't really address that either — they were just catch-all issues.
E: So I think this would be a great task for the PM team, or someone on the PM team, but if anyone in the SIG is interested in attempting to craft such a document — even just a bare-bones starter document that we can iterate on... Does anyone want to raise their hand for any of those? Otherwise, I'll continue to pester you next week. So, on the...