From YouTube: Kubernetes SIG Architecture 20180906
A: All right, welcome everybody. It is Thursday, September 6th, 2018. This is the Kubernetes SIG Architecture meeting. I am your co-chair, Jaice Singer DuMars, and I will be kicking the meeting off here shortly. If you want to follow along after the fact, the agenda is at bit.ly/sig-architecture, and you can see notes there and links to all the pull requests and other things that we'll be talking about. Let's go ahead and dig into the agenda. There is our standard walking of the boards; right now there's not too much on the boards.
B: Clayton and I are the only approvers, because basically every time I look at a set of tests, some new issue comes up, so we need to flush those things out and get them documented. Tim has commented that we need to understand the delta between the way the tests actually are and the way they're intended to be, because we definitely have a lot of problems in those areas. We need to identify those things and fix them.
B: In the past, we've had a bunch of tests depending on event field string contents, which should not have been, and which later broke in version-skew tests, especially with two releases of version skew between the kubelets and the control plane. So yeah, the tests we have are very flawed because of the way they were originally assembled:
B: Folks looked at tests that existed and tried to identify which ones they thought would be portable, and those got labeled conformance. Then the whole conformance and certification program was developed years later and used that test set as a bootstrap set. But the bootstrap set is flawed in many ways; someplace I actually created a list of the number of ways. You know, there's a disproportionate number of tests based on which people were meticulous about writing tests.
B: You know, some tests were written in different fashions, like as unit tests or integration tests or conformance tests, and Aaron is tackling that last one right now. So there are a bunch of reasons why the set of tests we have is a very skewed and problematic sample, and we just need to start taking a whack at that and chipping away at those problems. But if we're gonna expand the set, we need people to actually fan out the reviews and identify issues they think are not covered by the documented policies.
B: There's a plan: owners, approvers, reviewers, participants, right? So Clayton and I are the current approvers, so Clayton and I have to sign off on it. You know, Tim and Aaron are doing the feet-on-the-ground work, so I'm happy to also have them be approvers, as we want to include them as we expand out the set of approvers anyway.
B: But I'll put in a caveat for privileged operations specifically: I wouldn't kick them out immediately. I think we need to develop some kind of profile scheme, such that, like, I want the base profile to be as broad and as wide as possible (what's been called the default profile, currently the only profile we have), and then we need to think of bundles of features and behaviors that make sense in other contexts.
E: That is very important for us to move forward. And so, with the kicking out of as much as possible, we want to broaden while we're refining our criteria, and make sure there's an appropriate amount of effort going towards growing the set, so that people can rely on it, as well as rationalizing it, so we don't back ourselves into a corner. Yeah.
B: So I think one thing that would be helpful for the future profile discussion is for us to start categorizing these things as we encounter them, right? Currently we have criteria saying, you know, if it's not portable, or it's optional, or it's privileged, or whatever, then it shouldn't be in the base conformance profile. So we should start categorizing those things as well; they may be candidates for other profiles.
E: But that's why I think what I was suggesting was (and I should double-check this, but I think) if the test has a feature tag in it, we don't consider it as part of conformance, and it's not run as part of the default suite. There's also a NodeFeature tag that is used by the node conformance tests to kick things out. Similarly, I was proposing adding a feature tag, say Feature:PrivilegedAccess, and that way it's categorized for future use.
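To make the tag mechanics concrete, here is a minimal sketch, assuming the standard Ginkgo conventions used by the Kubernetes e2e suite; the [Feature:PrivilegedAccess] tag is the hypothetical one proposed above, not an existing tag:

```go
package e2e

import (
	. "github.com/onsi/ginkgo"
)

// Tags are embedded directly in the test name string; suite runners
// select or exclude tests with --ginkgo.focus / --ginkgo.skip regexes.
// For example, a conformance run focuses on "\[Conformance\]" and
// skips anything carrying a "\[Feature:.*\]" tag.
var _ = Describe("Pods", func() {
	// [Conformance]: picked up by the default and conformance suites.
	It("should be submitted and removed [Conformance]", func() {
		// ... test body ...
	})

	// A [Feature:...] tag keeps a test out of conformance and the
	// default suite; [NodeFeature:...] plays the analogous role for
	// node conformance. The hypothetical tag proposed above would
	// categorize privileged tests the same way.
	It("should allow privileged pods [Feature:PrivilegedAccess]", func() {
		// ... test body requiring elevated privileges ...
	})
})
```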
E: We could define profiles in terms of the features that they include, and it also gets kicked out of conformance that way. Yeah, I think so. I think it's important that, like, there are conformance tests that test features, and so we definitely need to get that nailed down in this doc, like what do we mean. I could go either way, but I would absolutely agree about needing privilege for testing.
F: We already have the tag facility; I think just demarcating these with privileged is totally legit and fine, and if people want to be able to exclude them for some reason, I think we just enumerate the state space we have currently and make it accurate. But the Feature subset is usually isolated for things that are gated, that are opt-in behaviors; it's not for things that are default
F: parts of the system. I think having a tag, or enumerating a separate set of tags, that carries these is right; privileged is one example, where it might require elevated privileges to be able to run these things, and it's demarcated as such. How we deal with that from a conformance perspective, I think, needs to be rationalized. I don't have a good answer for you right now, but I think we should talk about it; maybe an agenda item would be good for that.
B: Let me just explain where I was going with that question; I can't pull up the full detail. Say the user wanted to verify that OpenShift Online passed the conformance suite, or something like that. They would get a bunch of test failures, presumably for things that were actually blocked by the policies of that environment.
E: While it came up, Tim, on the point about tags: is there a doc right now that we can link into this to clarify the existing tags, so that we could then just open PRs from this to propose the changes for the new tags, or how they fit in? Like, yes, I try to follow test tags, but I always fall behind there.
F: There's a doc about tests that spells out feature tags and conformance tags and such. Are you thinking of something that enumerates the specific feature tags, calling out when you should use Feature versus Skip, Serial, or Disruptive?
E: Yeah, I helped write that doc. I just want us to live in a world where the set of conformance tests is defined by a single tag, called Conformance.
F: I do think, going back to Brian's point though, the world is not as simple as that. Operators who have privileged access can absolutely run all of it, but if a person has the ability or capability to run the tests in a sandboxed environment, they should be allowed to do that as well.
E: The idea of getting the tags set up correctly will benefit everyone, and I'm very happy about that; this is why I've opened up an issue to get rid of that skip list from Sonobuoy and have it pushed upstream. Which is why, again, I come back to this: the policy should be a Conformance tag for the base profile, and then anything additive on top of that shouldn't be baked in; that's provider-specific instead.
E: So there's a couple of variants of this that I run into a lot. We have one way of doing it for the e2e tests, which sometimes leads to pain with features; people don't always realize that their features aren't actually being tested, etc., etc. They run the e2e tests and then they just assume everything's going well.
E: I think we want to make sure that the e2e tests line up with it, which means just making the additional change and getting the tags right. But I was actually gonna say: Feature is one variant; Conformance is another variant; then there are the characteristics of the test, whether they can or cannot be run away from home; and then the subset, whether it's a profile or variant or subset of conformance for a specific use case. We should try to line that up with the terminology we use when we talk about the conformance program.
E: Cool, thank you, Jaice. Just the other comment here is, with all this talk of a litany of tags and a taxonomy of tags, may I remind you, we're jamming a bunch of text into a single string. Really, if we need this level of metadata about our tests, maybe prepending or appending tags into the test
E: name is not the appropriate place to store that information. Yeah, but it kind of works today, and I wouldn't want to block getting profiles and a little bit of sub-categorization on something like having to completely change how Ginkgo works. I would keep those separate. Yeah.
B: For people who attend the CNCF Conformance Working Group: if the topic of profiles is put on the agenda, please ping me and I'll show up to discuss that. My current thinking, and people are welcome to make suggestions, is that, you know, we've discussed a bunch of tags so far, and I don't think we want 30 profiles. The cross-product of all these things is going to be totally incomprehensible to users and is unlikely to result in portability, which is the whole goal.
B: Well, maybe that's a different comment, but yeah. So I'd like a few profiles. Someone saying "we need a cloud provider profile" or "we support the privileged profile as an option", or something like that, makes sense; but allowing people to select a cross-product of dozens of different combinations of feature sets seems like it would be just a disaster with respect to the certification aspect.
G: I had a question around coverage, and this doesn't only apply to conformance. We've done quite a lot of work in excluding inappropriate tests from various different suites. Do we yet have a sense of how good our coverage is, for conformance and for non-conformance tests, so that we can at least find out how close we are to being complete and what we need to do to get there? Yeah.
E: So at present we dump the audit logs, which are then pulled up by APISnoop, and actually generating that number and generating a report is still a manual process. I would like us to get to a point where that coverage is also an artifact. But roughly speaking: version 1.11 had about 35 percent stable API coverage; version 1.12, as of right now, has about 45 percent. But again, that's kind of meaningless, because API coverage doesn't mean functionality or behavior coverage, and ultimately that's what we care about.
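As a rough illustration of the kind of processing involved, here is a minimal sketch, assuming audit entries are written one JSON object per line with verb and requestURI fields; the real APISnoop tooling is more sophisticated and normalizes URIs against the OpenAPI spec to compute percentages like those quoted above:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// auditEntry picks out the two fields we need from a kube-apiserver
// audit log line.
type auditEntry struct {
	Verb       string `json:"verb"`
	RequestURI string `json:"requestURI"`
}

func main() {
	hits := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // audit lines can be long
	for sc.Scan() {
		var e auditEntry
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		// Record each distinct verb+endpoint pair the run touched.
		// Note: requestURI still contains object names here; real
		// tooling maps URIs to spec endpoints before counting.
		hits[e.Verb+" "+e.RequestURI] = true
	}
	fmt.Printf("distinct verb+endpoint pairs hit: %d\n", len(hits))
}
```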
B: So I've looked at this in the past, somewhat manually, and it's shockingly high. One issue is that we don't have good coverage measurement, like on the API aspect: I looked through the API coverage data that was collected at some point in time, and due to the way our clients do API discovery, and due to the fact that all the controllers are looking for the resources that they care about, basically every API endpoint is hit with at least a GET regardless of what you're actually testing. So we need to weed out
B: tests that don't cover a principal set of functionality. On the volume-oriented side, certain kinds of volumes (not persistent volumes, but secrets, config maps, projected volume sources, and things like that) are tested, because the people working in that area were pretty diligent about writing tests, end-to-end tests, and getting them marked as conformance. Whereas for pods, we had
E: I was gonna suggest: one of the approaches that we've bandied about is looking at regressions per release as a very concrete way of identifying where we might have gaps. You know, every release I can bring to mind three or four significant regressions, usually in areas that are under-covered by tests, where the semantics were just never nailed down. I think that might be a parallel thing that we could do alongside the calculation of what the effective coverage is: looking at where we're getting regressions and being data-driven on that side. Yeah.
B: I think that is great, especially for another topic which I don't know if we'll have time to get to today: reliability. I think that's super critical for reliability. We've broken compatibility on something basically every release in the past year, and clearly we need better test coverage, especially version-skew tests, and we actually need to pay attention when those tests fail, because almost always what happens is they constantly fail and nobody takes any action until we try to cut the release.
B: For conformance specifically, my strategy is to focus on things that we believe will have multiple implementations, because the whole reason behind conformance is that we want to ensure consistent behavior. So things that are explicitly pluggable, like CRI and CSI (the utilization of those things from the user API perspective is conformance), and we know people have swapped out kube-proxy and the scheduler and the kubelet, even etcd now, things like that.
B: There are things where we can look at the community and see what they've done, see the things that are explicitly portable, and say: okay, those things are gonna have multiple implementations, or most of them already have multiple implementations, and we should ensure that they behave the same way, so that users aren't surprised.
B: So, coupling that with the idea of what the most critical, most used functionality is (pods), and that thing also being the most pluggable and having multiple implementations, I suggest that we focus on pod functionality as the first thing. I just went through and looked at every field, and we didn't have coverage of almost any of them in the conformance tests. So it's not rocket science to go and fill those gaps. The node conformance tests that Aaron is moving to conformance have filled some:
B: liveness probes and things like that. But there are still many, many, many pod features (it's a very rich API) that are not tested. So that's, you know, after pods, or in parallel with pods, we can also do other things. There is some low-hanging fruit being pursued in terms of existing end-to-end tests that just happen to not be labeled conformance: deployment tests and StatefulSet tests and DaemonSet tests, and that's fine.
B: But it's not my priority. Tests that exercise etcd behaviors that we explicitly want to provide guarantees about would, I think, be another category, especially with people even doing things like sharding etcd and whatnot; we should make sure we have tests for those. Right now, sadly, our clients rely on behaviors that we're not supposed to guarantee, so we'll need to fix that.
A: All right, well, we'll hope that they come back online and we'll see what's going on there; that is a little odd. All right, moving along: there is an incoming API review, so API reviewers on the call, if you can just take a look at that; I'll paste a link to it in the chat. Clayton, this is you and the API review team. I put it in the tracking board, so once that gets assigned to somebody, we'll go ahead and move it along the board. Dims?
J: It sounded like there was agreement that we don't want feature gates to serve as a way to turn behavior on and off in perpetuity. Once a feature has reached stability in testing and acceptance and has reached GA, the feature gate should be on, and probably announced as deprecated in that release, and then removed in the release after that.
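A minimal sketch of the lifecycle being described, using a simplified stand-in for the real Kubernetes feature-gate machinery (HypotheticalFeature and the types here are illustrative, not actual Kubernetes names):

```go
package main

import "fmt"

// A gate is born alpha (default off), flips default-on at beta, and
// once the feature is GA the gate itself is deprecated and then
// deleted a release later, as argued above.
type prerelease string

const (
	alpha prerelease = "ALPHA" // default off, opt-in
	beta  prerelease = "BETA"  // default on, can still be opted out
	ga    prerelease = "GA"    // default on; gate deprecated, removed next release
)

type featureSpec struct {
	Default    bool
	PreRelease prerelease
}

var gates = map[string]featureSpec{
	"HypotheticalFeature": {Default: true, PreRelease: ga},
}

func main() {
	for name, spec := range gates {
		if spec.PreRelease == ga {
			fmt.Printf("warning: --feature-gates=%s is GA; the gate is deprecated and will be removed in the next release\n", name)
		}
	}
}
```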
J: If the behavior it is controlling is something that we want to be configurable long-term (we actually want to have a degree of freedom long-term), then that needs to be an actual option. And there are things like that, right? Things that require additional inputs, additional configuration, and so on. But every degree of freedom we keep long-term increases our test matrix just enormously, so we only want those for some things. Oh yeah, when
J: I think, ideally, we would know earlier. So let's talk about two scenarios here: one is the ideal, and then we can talk about the Gordian scenario. Ideally, we would know earlier in the feature cycle if this is a thing that is going to be optional in a cluster, right? And so the feature gate serves as kind of a backstop: if this is off, then this feature doesn't exist in your cluster, and that gates off code. It just makes it easier to reason about.
J: The mechanisms have been a little bit ad hoc for some of the transitions, in terms of how to figure out which one to use in a given situation. I feel like there are a couple of different questions here. There's how to end-of-life the feature gate, which I feel like the thread actually converged on, right? Like, we have a policy around administrative flags.
J: I don't remember requesting it that way, but okay; I believe that it exists, and it's entirely possible that I did. It would be great to set some precedents here around transitions for these features. If we think something is GA, then it should be on for everyone as the default.
J: If it's beta: one release or three months. Okay, for the CLI flag: if it's GA, two releases or six months. All right, so here's the question then, because that statement there is open to interpretation: is the flag itself, which governs a beta feature, beta, or is the flag GA? I would argue that the flag itself is GA, the feature is beta, and therefore you would probably be subject to the GA administrative deprecation.
J: Yeah, we can be nicer. To be clear, I'm saying we could actually argue this one in both directions. Think of it from the user's point of view: if they had deployed a cluster with an alpha gate, and the alpha gate goes away (like we just decide we're not doing that after all), the next release cannot
J: We've had this problem in the past, with gates and stuff that have been littered around the codebase, leaving very, very high complexity of combinatoric conditionals, cyclomatic complexity through the roof, and it sucks; it totally, totally sucks. We just need to be consistent about it, and we need to do it in a way that hurts the least amount of users the least. So I think there's
B: a question of whether the minimum time period before removing it and the maximum amount of time you can wait before you have to remove it are the same amount of time or not. And I have a problem with the second part: we don't have any enforcement, right? Exactly, we don't have a stick. We could, if we tagged, or release-tagged, everything; sure, we
B: I'm not sure that I would trust that to enforce it, and not to remove them, but to, like, start warning or whatever; people don't do it, apparently. Yes. Well, we've done that internally for flags. Yeah, yes, yes, we have; I was part of that effort: whitelist all the flags you can have and put time bombs on them.
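A minimal sketch of that time-bomb idea, with illustrative flag names and dates (not real Kubernetes flags): a check that starts failing once a whitelisted flag outlives its removal deadline, so deprecated flags cannot linger silently.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Every whitelisted flag records the date by which it must be removed.
// Running this as a test or init check makes the build fail loudly
// once a deadline passes.
var flagRemovalDeadlines = map[string]time.Time{
	"--insecure-example-flag": time.Date(2019, time.March, 1, 0, 0, 0, 0, time.UTC),
}

func main() {
	now := time.Now()
	for flag, deadline := range flagRemovalDeadlines {
		if now.After(deadline) {
			fmt.Fprintf(os.Stderr, "time bomb: %s should have been removed by %s\n",
				flag, deadline.Format("2006-01-02"))
			os.Exit(1)
		}
	}
}
```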
I: I think we got the okay for all six staging ones. So the first problem was: we used to use the k/community repo to create the access requests for the repos that moved to the orgs, and, you know, we were not letting people know that, so some people are still using k/community. So anyway, that was the first piece; we are going to move to the org for sure from now on, and if I see anything in k/community, then I will ask people to move it there.
A: All right, so this next action item: Stephen Augustus, myself, and some other folks have been talking about the fact that KEPs being in the community repo does not really make sense anymore, and we probably need to get those out, and some other things moved around. This is sort of what I see as a critical step in getting away from the quote-unquote features process and graduating toward the KEPs as the sort of source of truth for where this work is tracked.
A: So there's a number of converging efforts on this. Caleb Miles is finally looking at some of the automation around KEPs that we've wanted to look at for a while, and there's other stuff for the site. So basically, this is just an interstitial step to get things consolidated in one place, and possibly rename the features repo to keps, so that we maintain all the issue history and stuff that we have for features, but essentially start becoming focused on KEPs
A: as the statement of work for things that move forward. So I would say, just take a look at the link that's in here, in this issue (it's 2565 in community), and comment as you see fit. There's a few outstanding things, but for the most part it's pretty straightforward. Yeah.
H: There's some concerns that we haven't brought up yet when we think about the contributor site, which I know George has been working on pulling stuff together for. Building that with a tool like Netlify: it only knows about a single repo, so there's no good process to essentially aggregate multiple repos into a site that gets published with Netlify.
H: I mean, yeah. And that being said, I think one of the reasons to do the contributor site as a separate site in general was just so that it could move forward on its own and not get caught up in the whole release stuff, because it really is not going to be locked to releases. So I'm not saying that we shouldn't do it; I'm just saying that this is something that we have to work through. Yeah.
A: It's overdue and everybody wants us to move forward, but there are just so many irons in the fire right now, it's hard to turn our focus. So if you do have concerns or things you want to work through on that, definitely bring them up in the PR. Just from a SIG governance standpoint: as far as I know, SIG PM is a stakeholder in this and is on board with it. Stephen Augustus is the current features lead, and basically Ihor is stepping out of that role a little bit, so we're sort of seeing the next generation of leadership evolve around this process. You know, we've seen a nice thread of custody as the process has been handed down, and now it's Stephen and some other folks. So I really feel like this is a positive step in the evolution of how we think about what gets added to the ecosystem.
B: So I did want to kick this off as a topic. I think one of the roles for SIG Architecture needs to be setting sort of technical direction and priorities across the entire project for things that are cross-cutting. We're seeing more and more, as more and more users of Kubernetes are out there, that more and more issues are encountered in the field, and we're seeing that at least from our side.
B: I haven't gone through and triaged issues being filed on GitHub lately and correlated those with releases or anything (that might be an exercise we want to do), but we're seeing an increasing number of reliability issues. Part of that is just the number of people doing things, and part of it is that people are putting more stress on the system.
B: But, you know, traditionally the community has focused a lot on features, features, features. I feel like we have, you know, sort of an MVP level of functionality across a pretty broad feature set at this point. We need to start shifting the balance of focus and effort to other things, like test flakiness, conformance coverage, and reliability. So I'm going to float that. As Clayton mentioned, looking at things that were broken in past releases and making sure we get regression tests for those things: I think that's a great idea, and that would be part of this.
B: We've had a bunch of compatibility breakages, even in basic functionality, behaviors that have been repeatedly broken. Tim St. Clair also said he had some thoughts on this, but I want to get this out there as sort of an initiative that could be driven by SIG Architecture, to raise awareness and maybe apply some organizational effort and prioritization effort in this area, I think.
F: Documentation here, too, is key, because documentation for some of the features we even have is non-existent, or sparse at best. Yeah. That's my major sticking point: we've enabled features to exist over time, and even publicized their general availability to a wide audience, but we don't actually have documentation for the details of enabling said features.
A: I was gonna say, actually, that it's not true that there wasn't an effort. I wrote a comprehensive policy document on this and circulated it and got buy-in. But what happened was, we were victims of the tautology of open source, which is: whatever's in the product is in the product. We wanted to exercise control, and there was no real way to do that. So yeah, that's what happened.
C: I think next quarter we're going to be reintegrating, probably maybe as soon as the queue is open again. We've got a very short list of things that I want to do before I think it's safe to integrate. I'm actually feeling better about feature branches than I was before we did the experiment previously, yeah. And I think, the more I think about it,
C: the more I think that, if we are judicious about assigning different efforts to a limited number of feature branches, based on what areas of the codebase they're primarily going to be interacting with, we might be able to limit the problem where everybody has a terrible time, or where everybody except the first person who merges back into master has a terrible time. Yeah.
B: Sub-area or component branches have come up, you know, inspired by Linux or something like that, which bundle the fate of everything that's in that branch. We may not want to go that far, but groupings of things that we feel make sense together, where it's fine for them all to be in or none to be in, seem
C: I feel like feature branches may be the way to go, given our mono-repo. I think an even better approach would be to separate everything into well-defined subprojects with well-tested interfaces. I mean, that's sort of a dream that probably isn't going to happen, because it's so much work. I am thinking about how API Machinery could do that, because we're such a large part of the project, and we have so many interfaces that are just not well
G: defined. Failures that went through into production without being caught by our tests can presumably be ascribed back to some SIG, and that counts against their error budget. Then, as long as we have the right metrics in place to say, "SIG X, your test coverage is Y and your error budget is Z, and you've expended so much of it in the past three releases," we can perhaps have a more effective stick.
G: But ultimately, some SIG, some owner, a member of an OWNERS file, approved that thing, so they are responsible. Even if they didn't write the code, they were responsible for approving it. Yeah, and if they approved bad things that break stuff, then they should not be allowed to approve anything; or they can approve whatever they like, but can't actually release it. Yeah.
B: The way in which this does relate to the feature branches and adding new things is that it provides an enforcement mechanism. One thing Node.js does, in addition to having really, really high test coverage, is that things developed by their contributors don't automatically hit releases. I don't remember exactly what their process is, but everything is opted in to releases instead of being defaulted into releases. Yeah, I think that would allow us to move to that type of model: if we said, well, your SIG has exceeded its error budget, none of your feature
E: I was gonna say: the data that we do have, that comes in after the fact, like bug fixes that go into a release branch, right, that go into, you know, 1.8.1, 1.8.2, 1.8.3; even just looking at those, just going through and doing the coarsest filter on the error budget, would be more effective than nothing, right? We're gonna miss some of them, but it's a good initial dataset that we could automate. Yeah.
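A minimal sketch of such a coarse filter, assuming the Kubernetes convention that cherry-pick PR titles contain "cherry pick"; the branch to inspect and any thresholds would be up to whoever automates it:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Count cherry-picked fixes landing on a release branch, e.g.:
//
//	git log --oneline upstream/release-1.8 | go run budget.go
//
// A substring match on the conventional PR title gives a rough
// per-branch count of post-release fixes, as suggested above.
func main() {
	count := 0
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if strings.Contains(strings.ToLower(sc.Text()), "cherry pick") {
			count++
		}
	}
	fmt.Printf("approximate post-release fixes on this branch: %d\n", count)
}
```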