From YouTube: Kubernetes SIG Cluster Lifecycle 20180313
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.xsikgkn152he
Highlights:
- Rolling etcd back to 3.1.12 for the 1.10 release
- 1.9 to 1.10 upgrade tests
- HA upgrade doc
- Code freeze Wed
- Future discussions needed on kubelet dynamic config & CoreDNS
- Deleting old getting started guides
B: The way we've been structuring upgrades is that we've always done them as a feed-forward mechanism, so it should work, and I know the code that was written a while ago did this roll check: save the data, update, and roll back on a failure. I have to verify whether or not the standard upgrade scenario works, because there were a couple of different upgrade paths that we had supported, so I need to verify that it does do an upgrade to 3.1.12. This kind of leads into a different topic.
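A minimal sketch of the save/update/roll-back-on-failure pattern being described; the function names and the stubbed backup/restore steps are hypothetical and are not the actual kubeadm or etcd upgrade code.

```go
package main

import (
	"errors"
	"fmt"
)

// upgradeWithRollback saves the current data, applies the upgrade in a
// feed-forward fashion, and restores the saved data if the upgrade fails.
func upgradeWithRollback(backupDir string, apply func() error) error {
	if err := saveData(backupDir); err != nil {
		return fmt.Errorf("saving data before upgrade: %w", err)
	}
	if err := apply(); err != nil {
		if rbErr := restoreData(backupDir); rbErr != nil {
			return fmt.Errorf("upgrade failed (%v) and rollback failed: %w", err, rbErr)
		}
		return fmt.Errorf("upgrade failed, rolled back: %w", err)
	}
	return nil
}

// Hypothetical stubs standing in for the real backup and restore steps.
func saveData(dir string) error    { fmt.Println("backing up to", dir); return nil }
func restoreData(dir string) error { fmt.Println("restoring from", dir); return nil }

func main() {
	err := upgradeWithRollback("/var/lib/etcd-backup", func() error {
		return errors.New("simulated upgrade failure")
	})
	fmt.Println(err)
}
```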
B: The next topic is the upgrade tests, which have never worked inside of the test infra, and I don't know the reason they have never worked, because this issue really conflates things. There are a bunch of machinations that go on as part of the upgrade tests that don't exist in the other tests, and some of the other tests are very straightforward.
B: They install Kubernetes from a given target release with known artifacts and away they go, but the upgrade test takes artifacts from a cross-build location which is built by a different, separate test. So it's very confusing; I don't understand the reason why it's done this way. I'm sure Lucas had reasons, because he usually did everything for a reason, but I don't understand the logic there, and I was working with the test-infra folks to try to understand why we're doing this.
B: But the problem is that Testgrid is noisy too, right? There has to be a fine line between what is useful signal that people will pay attention to and what is just noise. The problem is that a lot of people, myself included, because we have so much email from the Kubernetes project, err on the side of filtering and bucketing all the things and making sure that only those that are super important make it through the gauntlet that is the Gmail filters.
B: But that's the state of that. I mean, does anyone have any insight into it? I think the most important thing, to get all the artifacts building out of master or whatever branch they're on, is to have Bazel be able to do cross builds, and I know that was one limitation that we've had for a long time. Does anyone have any insight into the state of that one?
B: That also leads into my next topic, which I didn't actually write down: the jobs are super complicated. When is the Cluster API able to come in, so we can get rid of kubernetes-anywhere? Debugging the output from kubernetes-anywhere is actually really difficult, because it has a level of obfuscation around kubeadm in it, where we don't actually see some of the details.
A: The short answer to that is: we want to stabilize the API a little bit before we start rebasing tests on top of it, because otherwise making changes is going to break Testgrid, and we don't want to break Testgrid as we tweak the API. The goal is that Google wants to have that by the end of Q1, which is the end of this month. Talking to Audrey, I think we're a little bit behind on that goal, so it's likely to happen sometime in the middle of the next release cycle.
B: It's possible we will throw resources at helping get rid of kubernetes-anywhere to get the Cluster API in place, because we're also interested in the Cluster API, so we'd happily do that so that it's much more maintainable. Right now, no one really maintains kubernetes-anywhere; it's kind of this ill-maintained thing, and when issues happen there we have to poke the people who have the most knowledge about how it's set up, and it's really difficult to understand.
A: Yeah, it'll be great, and like I said, I think we're getting closer. We've been doing a lot of refactoring, trying to move away from CRDs to API aggregation, which caused a lot of churn in the code. I just LGTM'd a PR yesterday that deleted 8 million lines of code, because there were lots of vendored dependencies. So again, we don't want to be doing that sort of thing and break all of the kubeadm tests that are blocking while we're churning something that we're calling alpha.
A: That being said, people are actually working on the Cluster API, unlike kubernetes-anywhere, so even biting the bullet sooner, if we do break things, at least people are working on it and incentivized to fix them, as opposed to kubernetes-anywhere, where somebody kind of twists your arm a little bit and you're like, oh, I guess I'll go fix that, because I really want to be working on something different. Yeah.
B: Exactly. The problem I've faced is that no one understands the details, so we're all kind of playing a game of Clue, and sometimes we have enough breadcrumbs to be able to diagnose the problem, but there's no one who actually owns it. I want to have concrete ownership of some of the testing infrastructure so that, if things fail, we can loop in the right folks at a whim to help fix it.
B: We can do multi-node on a single node, and that's part of a separate effort that's ongoing in Testing Commons, which is a subproject of SIG Testing, where we have a spec that we're trying to create. It's basically: how do we create this client interface library for folks to import that provides a consistent means to spin up a 1-to-n node cluster behind the scenes, so that they can do all their e2e testing locally?
B: That's useful for a lot of people who want to do both integration and end-to-end testing in a sandboxed single-node environment that looks like multi-node. People do this today already, but it's duct tape and Bondo and Mr. Wizard tricks, and we want to create a single client interface library that gives a programmatic interface for people to write tests against.
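A hypothetical sketch of the kind of client interface library being discussed; none of these package, type, or method names exist in a real project, they only illustrate a programmatic way to request a 1-to-n node local cluster and get a handle to write tests against.

```go
// Package localcluster is a hypothetical illustration, not a real library.
package localcluster

import "context"

// Config describes the cluster a test wants spun up behind the scenes.
type Config struct {
	Nodes       int    // 1..n nodes, possibly multi-node on a single machine
	KubeVersion string // a release tag, or artifacts built from head of master
}

// Cluster is the handle a test writes against, regardless of the backend
// (DIND containers, VMs, hollow nodes, and so on).
type Cluster interface {
	Kubeconfig() string // path to a kubeconfig for client access
	WaitUntilReady(ctx context.Context) error
	Delete(ctx context.Context) error
}

// Provisioner is implemented by each backend; tests depend only on this
// interface, which is the consistency the spec is after.
type Provisioner interface {
	Create(ctx context.Context, cfg Config) (Cluster, error)
}
```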
B: Look, we have a lot of inertia, folks are working on it, and we're going to have a spec. So if people want to help out, there is a Slack channel called testing-commons, and at the top of that Slack channel is basically the spec that we're trying to rally focus on, and ideally, dims, I would love to replace local cluster up with something like that. That's my long-term goal. We got it.
B: Yeah, there's a broad swath of folks. This would do two things for cluster lifecycle. It would provide the entire community with a means of spinning up local clusters in a programmatic way for them to write their tests, so if people are writing components that are external, like the Cluster API, they could write integration tests without having to, you know, it would be a local spin-up, and this would also work for other things.
A: I was curious how that overlaps with minikube. Minikube is also sort of a local Kubernetes development environment that was not created by Cluster Lifecycle, but sort of nominally falls under SIG Cluster Lifecycle in the same way, because it's used to create sort of local clusters.
B: One big difference there is that minikube is a turducken: it starts up a VM with Docker inside of it and then does all the other stuff. That's one thing, but the other thing is that we want to build against any arbitrary artifacts that are built. We want to be able to specify that it can basically run against head of master, build those components, and test those components locally, and you can't do that with minikube.
B: So I could talk with those folks; I'd be happy to loop them into the conversation, because I think ideally we can get to a place where all these things converge. I don't want to own or maintain any of it. I just want the interface to be able to write my tests against, so that, you know, it's consistent, yeah.
A: That's why I brought up minikube, because there's a set of folks there that are trying to solve a similar use case, which is: I want to run something locally to play around with Kubernetes or to test my code, and it would be nice if we could get those people at least involved in the discussions of the Testing Commons stuff, because they might be interested in helping shape that project and adopting it as well.
B: But the side goal there too is that we don't just want to have straight-up DIND nodes all the time; we want to be able to provide the capability to spin up pieces of the control plane for doing performance-level testing. People would want to have a programmatic interface to say, I have these components, and here's your kubeadm config for pieces of it, and then go. Ideally, we need component config for some of this stuff, because we would want to be able to, you know, turn down some things and turn up other things with different knobs. That way, if a person wanted to do scheduler testing against hollow nodes, I don't want to spin up a thousand DIND nodes on a single machine, but I do want to spin up a thousand hollow nodes and then do a performance-level integration test against that, a simulation test, yeah.
A
I
was
in
a
point
out
in
chat
that
many
Cubans
may
be
more
targeted
towards
app
developers,
or
it
sounds
like
what
you're
working
on
is
really
targeted
at
kubernetes
developers
that
are
trying
to
configure
koreas
clusters
in
different
ways
for
performance
testing
or
for
individual
component
testing
in
more
of
a
needy
environment.
And
so
maybe
there's
not
a
lot
of
overlap
there,
but
I
think
that's
definitely
worth
talking
and
try
to
see.
If
maybe
we
can
use
some
of
the
same
facilities
under
the
hood
to
to
get
those
two
projects
going.
B: Hey there, should we go into the next topic? Yes, the next topic is the HA upgrade doc. Jamie, in 1.9, created the original how-to-do-HA-with-kubeadm-the-hard-way doc, but we didn't actually publish a doc that outlines how you upgrade that. I know Martin had created the doc and he's been getting feedback.
B: There's other ongoing work, which is part of the other breakout session, where we're helping to define the next states, and there is a lot of ongoing work there. There are proposals for master join scenarios. There is also other external documentation that people are working on for cluster lifecycle, and there's other work with regard to how you store individual node state for the control plane hosts, I'm trying to say master nodes, in an HA environment, because currently there's an issue which exists in the upgrade doc.
B: I don't actually know the dates for that; we'd probably have to talk with the SIG Release team, but usually they're pretty asynchronous and can merge things on non-release boundaries, so I think that seems reasonable. I would like to have the doc in place, because part of what people will do is try to upgrade, and we want to make sure that that user story is not explosive.
B: Yeah, there was another person, I forget who it was, if you're on the call right now, who was also looking at updating the documentation for the massive plethora of configuration options that have been added, because there isn't actually a good document for that, and I do think that's part of the GA release cycle. We should have a well-defined set of config knobs that are documented for folks to use. You can generate and take a look at the one that's created as part of the ConfigMap that gets stored on your cluster, but there's not a document to reference that basically says, here are all the knobs, and this is what each one does.
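For reference, a minimal sketch of reading that on-cluster ConfigMap with client-go, assuming a recent client-go, a kubeconfig at the default path, and a cluster set up by a kubeadm version that stores its configuration in the kubeadm-config ConfigMap in kube-system; the data key has changed across releases, so the contents are printed raw.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the ConfigMap kubeadm uses to persist its configuration.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "kubeadm-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for key, value := range cm.Data {
		fmt.Printf("--- %s ---\n%s\n", key, value)
	}
}
```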
B: So if you want to have coffee or whatever, feel free to hit me up. My current tentative plan is to try to triage the GA list and the 1.11 list into an actual, actionable set of items that we should go through as a sig and get people to sign up for, because I know people are interested in certain pieces, and to have a little bit more rigor in our execution process.
B: That way we can get the train flowing. This cycle was the first one without Lucas, and there were a lot of bumps and false starts, and I'd like to smooth it out for next cycle so that folks who want to contribute have well-defined execution paths, because there have been a lot of new folks who have been spun up this cycle who are more interested in contributing.
B: And I'll be more actively involved too, and try to clean things up as they go, to try to get it into a state that anyone can make sense of, because right now it's difficult to divine from two repos, five tests, and docs what the current state of everything is. I want to disambiguate that stuff so it's as clear as day.
A: In terms of your plan for disambiguating, I know that we've started to have a number of KEPs linked and chatted about related to kubeadm, namely the KEP for HA and also the KEP for kubeadm join. Does it make sense to make a top-level KEP for kubeadm and use that to sort of be the entry point for the current status? There's a section in KEPs for implementation history, and we could link to the sub-KEPs for specific pieces.
B: I would be totally game for that, because I think it would help to have a single point of entry, one which is not an ill-maintained wiki page but something that we actually have to maintain as developers as we go along, because we will enforce some level of process, and that'll give us a means by which to check, every iteration, what we're actually doing, right.
A: That's kind of why I was wondering if you had a different plan, or if maybe that was a reasonable plan to go forward with, because it seems like, as a community, we're starting to move in the direction of using KEPs, and our sig is starting to send out more PRs for KEPs, and maybe that's a good place to sort of coordinate, as like the coordination point.
B: I think there are two parts: that's one part. I think the second part is making sure we actually have clarity on the issues in the kubeadm repo, with relative prioritization and a breakdown of some of the items. That has always been an ongoing problem because of the split in repositories; people get very confused and they still file issues on kubernetes/kubernetes, especially newcomers. So what I want to do is try to distill that down.
A: Do you know if anybody from ContribEx is working on auto-labeling of PRs? I know that there's been some effort to auto-label issues and assign sig labels to issues and sort of keep issues associated with sigs, and there's some work to auto-assign PRs to reviewers and owners. But it sounds like maybe what you're asking for is, based on the contents of a PR, can we auto-assign the area labels and, via the OWNERS, assign it to us, to a sig, or something.
B
Some
of
that
is
done,
but
I
think
having
the
person
be
explicit
about
it
up
front.
It
actually
helps
a
lot,
especially
if
you
want
to
loop
in
PR
reviewers
right,
because
what
happens
is
even
if
it
has
the
single
label.
The
people
who
are
the
reviewers
of
that
content
area
aren't
necessarily
notified
of
when,
when
something
has
arrived
and
I
do
have
my
filter
set
up
to
allow
this.
This
particular
area
of
PR
reviews
to
come
through
the
gauntlet
of
github
filters.
B: So I think if people who are new to the community want an issue looped in, I have a half-filled-in comment which says: please CC @kubernetes/sig-cluster-lifecycle-pr-reviews on any PRs that you make to the upstream repo, and that will help loop in the right folks. I do know that there are a couple of folks in China who are very quick to review, and they're great, and usually what I like to do is, if I don't get to a review in time and somebody else has already started...
A: All right, the last agenda item that you're filling in right now is the code freeze this Wednesday. From what I understand, and I haven't been following along very closely with this release, but based on past releases, I think they'll cut the branch on Wednesday and try not to let anything else in, right? Yeah.
B: Everything that's in the 1.10 milestone on the kubeadm repo I'm just going to punt out right now, probably today, so unless there are things that are added by the sig, I'm going to punt those, and I'm going to triage the tracking issue that we have within the main repository. The only issue that I can think of that requires attention is setting the defaults for the release.
B: He uses the tags from a stashed location on GCS buckets to determine what is the latest, and then he sets the min supported version, but I need to verify that this is all correct: that, building against that, I can actually get the beta versions in there as part of the latest build stuff, and that I can then get the actual release tag for 1.10, so I don't need to update some const or some variable inside of the code.
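A minimal sketch of resolving a version label from the published marker files on the release buckets; the exact bucket and path used by the CI jobs discussed here may differ, so treat the URL as an assumption.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// resolveVersion fetches a marker file such as "release/stable.txt" or
// "ci/latest.txt" and returns the version tag it contains.
func resolveVersion(marker string) (string, error) {
	url := "https://dl.k8s.io/" + marker
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	v, err := resolveVersion("release/stable.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("latest stable release:", v)
}
```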
B: And the problem that I mentioned earlier in the call, the Testgrid issue with the 1.9 to 1.10 upgrade, is that that particular job is the only job that uses that location in the CI cross bucket, where the artifacts are not being published or updated, so I need to talk with test-infra about figuring out why.
B: Other Testgrid stuff is all green, so for regular 1.10 release stuff, all the issues that I'm aware of have been fixed and we haven't had any blockers, so the straight-up 1.10 release looks clean. If folks want to validate or do some type of upgrade testing manually, that would probably be beneficial from the community perspective until we can figure out what's wrong with this test-infra problem.
B: That's the thing I was just talking about a second ago, the defaults. Thank you for that. So that PR is the one I need to hunt down this cycle to try to understand; if you have a link to his old PRs, that would help to understand what the final step in the puzzle is to make sure that the defaults are all set for the release. Yeah.
B: Was the other one CoreDNS enabled by default? Ideally, I'd like to get those things in the 1.11 cycle, so when we have the 1.11 planning we should probably identify all of the core pieces that we want to get in place. I don't know if any other people have seen it, but we have definitely seen it with a weird kube-dns issue.
A: It's getting better. It's based on both the number of nodes and the number of cores, right? So if you have lots of small nodes, it'll go up; if you have a smaller number of big nodes, it'll also go up, as we assume that basically the number of cores in your cluster will determine the rough ratio for scaling.
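The behavior being described matches the "linear" mode of the DNS horizontal autoscaler, where the replica count follows whichever of total cores or total nodes dominates; a minimal sketch of that calculation, with illustrative (not default) parameters.

```go
package main

import (
	"fmt"
	"math"
)

// dnsReplicas mirrors the linear scaling rule: scale by nodes and by cores,
// take the larger result, and never go below a minimum replica count.
func dnsReplicas(nodes, cores int, nodesPerReplica, coresPerReplica, min float64) int {
	byNodes := math.Ceil(float64(nodes) / nodesPerReplica)
	byCores := math.Ceil(float64(cores) / coresPerReplica)
	return int(math.Max(math.Max(byNodes, byCores), min))
}

func main() {
	// Many small nodes and a few big nodes both push the count up.
	fmt.Println(dnsReplicas(100, 200, 16, 256, 2)) // 7, driven by node count
	fmt.Println(dnsReplicas(4, 512, 16, 256, 2))   // 2, driven by core count
}
```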
A: Well, from what I can tell from SIG Network, the plan is to switch to CoreDNS, so yes. I'm just a little surprised that the kubeadm deployment of kube-dns would not be replicated but CoreDNS would be replicated, because it seems like, well, replicating the existing deployment would, but also they...
A: I talked to Mike about dynamic config a couple of weeks ago. I will sync back up with him again shortly and try to figure out where we are on that. Last I talked to him, he thought that part of it was going to make it for 1.10, but not the whole thing. I'm not sure how much ended up making it in before the freeze, and I don't know if what's in was enough for us to use; I don't think the...
B: We need the whole kit and caboodle in order for us to enable it by default, and I would like to enable it by default, because there are issues with unit file updates, right? Ideally, if that information is stored on the cluster and upgraded as part of the component upgrade, that would alleviate this problem entirely, because the packages that we create are not the best versioned and maintained.
B: Hopefully, as we push those artifacts into Bazel, we can simplify them and get out of the business of having to maintain mucked-up unit files. We've had a number of issues where command-line parameters and options have changed and haven't percolated as part of the release, and in the process of upgrading some of these things, things have failed. But if the state of the kubelet configuration is stored on the cluster, this greatly simplifies the problem, and it gets upgraded as part of the version semantics.
A: All right, we're sort of at the end of the agenda. We have 15 minutes of slack time here; I'm happy to give that back to people and let people go early, but as long as we're all here, does anyone else have anything they'd like to add to the agenda for today? Tim mentioned in chat that he had two PRs he dropped into Slack, but he had to leave, so we don't need to discuss those right now, but if you are reviewing PRs, please go take a look.
F: And I can quickly mention, I gave an update to SIG Docs on the deletion of some really stale docs in the getting-started guides directory. There's been a bit of movement there, and it's looking okay. As a result of this effort, in this release cycle I managed to delete like three guides straight away, and there are a few more bits of content that I find sort of questionable, and I'm waiting for people to respond on that on issues in the website repo; I've not checked back yet, yeah.
A: Awesome, yeah. We also, I think, have cleaned up a couple of the getting-started guides related to kube-up. Mike Danese and I have spent some time over the last couple of months ripping out a whole bunch of the kube-up code that was no longer being used. I think we're down to basically only two places where it works now, which is GCE and, was it CentOS, I think, as the other one.
A: There are a couple of folks that are still using the CentOS version of kube-up, and even in the GCE one we've been trying to rip out variants of it that supported Container-VM and Container Linux, which we're not testing and we don't think anybody's using. So we are trying to sort of slim down that code surface and delete the associated getting-started guides with it as well.
A: So hopefully that's helping clean things up a little bit too. Again, as Tim alluded to, this is sort of paving the way to eventually be able to switch to the Cluster API and delete all the kube-up code. I know it's been on Brian Grant's hit list for a while to delete the whole cluster directory, and deleting the kube-up code is part of that.
B: That's a slog; it's the whole add-ons v2 conversation that's happened for a long time, but I think as we trek towards GA, that should be on the hit list as a target for potential items. I think now that I have my bearings, at least, I'm going to plan to loop in folks and try to get execution on a lot of this stuff. We actually have resources now to execute on a bunch of this, so yeah.
A: And likewise at Google, add-ons is sort of bubbling up the list of things that we're going to start to care about more. We thought we'd found an owner and driver from our end, but that didn't turn out to work out, so we're now looking for another person to help drive that conversation too. So I think that in the next release cycle or two, we're going to actually start putting some weight behind moving forward with a better add-on story.
A: Alright, thanks Elliot for the update; that's great, great to see progress on cleaning up our documentation. I think documentation is often something that we overlook, and we should probably be spending more time on it than we do.