From YouTube: Kubernetes SIG Architecture 20190620
A: Is [it] me? Well, then, the floor is yours.
B: Awesome. I have some slides, but I am also starting my vacation today in beautiful, warm [unclear], welcome to the ocean. So Zach is online; he's going to be remotely pushing those slides and screens forward. Thank you for that, Zach. We've had a few questions around what conformance looks like, how far we've come, and where we can go, and that's part of what we've been doing with APISnoop and the crew over at ii.coop.
B: We'll go through some of the questions we came up with, some of the answers, and in particular the directions that we came up with, but I'm really interested in hearing what other questions are meaningful to us that we can use, looking at how we're actually using our API to drive what conformance and what our tests look like. Our first question is just: which APIs are conformant? The links from the top are persistent.
B: If we knew which APIs are conformant, we would see that there are 96. There you go: we have 96 existing conformant endpoints, and of all the conformant endpoints I focused on which were stable, because sometimes conformance tests hit things that aren't stable. These 96 conformant, stable endpoints give us a base for where we are. But what if we knew which stable core APIs were already being tested, but we haven't yet tagged those tests as conformance? The next slide shows that we have 30.
B: These are stable core endpoints where, if you look at the checkboxes there, it's not conformance-tested but tested: that's 30 endpoints. If we take that list, we can see on the next slide that with those 96 tests and the 30 core APIs that we could promote, if they're promotion-worthy, we would get a 16% increase if we got half of them, and up to a 31% increase in stable API coverage (15 of 96 is about 16%, and 30 of 96 is about 31%). I think that's nice! That's a nice start!
B
If
we
go
a
bit
further
beyond
just
stable
core
and
said,
let's
expand
this.
What
if
we
knew
about
stable
everything
api's
which
already
have
tests
written
but
are
not
yet
promoted
to
conformance
the
numbers
a
bit
higher?
The
numbers
I
think
50
51
endpoints.
So
we
could
look
at
the
existing
test
hitting
51
endpoints
and
promote
those
to
conformance
if
their
promotion
worthy
and
that
would
give
us
a
total
on
the
next
slide
of
1954
TM.
That's
nice!
That's
really
nice
and
give
us,
even
if
we
only
got
half
of
those
promoted.
B: So one of the other questions we came up with that seemed meaningful is on the next slide; there are our 147 endpoints, yeah. What if we knew which stable APIs were being hit but not tested? These are stable APIs that, during the course of our release-blocking job, are hit in some way by some application other than directly by our e2e tests. If we look at the results of that, we can see that there are 95 endpoints being hit.
B: The next question we came up with was: what if we knew... Well, part of the output of these actions was to create umbrella issues. For each of these questions we created an umbrella issue, and under each of these umbrella issues we have listed all of the tests that we believe are promotable.

B: They may not be straight promotable, but they do point out the list, so we're going to go through and triage that list and, with the community's help, figure out which ones are valid for promotion. That was for the first set, and this one is our list of stable API endpoints. We've got three of these different umbrella tickets.
B: For these big three questions; next slide. The last one was: what if we knew which untested APIs were being hit not by just anything, but by kube-*, that is, anything in the Kubernetes core that's hitting the API server itself? I believe that's... how many is that? Let's see, 125 endpoints. So these 125 endpoints are being hit directly by kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy. You can just put a regex into that field and it'll match against which user agents are hitting the different endpoints.
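(The kind of user-agent matching described here can be sketched in a few lines of Go. This is a hypothetical illustration over the audit.k8s.io/v1 event fields, not APISnoop's actual implementation.)

```go
// Hypothetical sketch: read Kubernetes audit-log lines from stdin and
// bucket hit endpoints by kube-* user agent. Illustrative only; this
// is not APISnoop's actual implementation.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// auditEvent picks out just the fields we need from an audit.k8s.io/v1 event.
type auditEvent struct {
	RequestURI string `json:"requestURI"`
	Verb       string `json:"verb"`
	UserAgent  string `json:"userAgent"`
}

func main() {
	hits := map[string]map[string]bool{} // "verb uri" -> set of user agents
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip malformed lines
		}
		// Keep only traffic from core components such as kube-apiserver,
		// kube-controller-manager, kube-scheduler, and kube-proxy.
		if !strings.HasPrefix(ev.UserAgent, "kube-") {
			continue
		}
		key := ev.Verb + " " + ev.RequestURI
		if hits[key] == nil {
			hits[key] = map[string]bool{}
		}
		hits[key][ev.UserAgent] = true
	}
	for endpoint, agents := range hits {
		fmt.Printf("%s hit by %d kube-* user agents\n", endpoint, len(agents))
	}
}
```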
B: We came up with an answer, but there are probably other interesting questions that this group has that we could look at, and that's where we're at now. On the next slide, yeah, there we are: with your feedback, I'd love to make APISnoop the best tool for identifying and prioritizing untested behaviors, but we need to be asking the right questions, and I want to make sure that we're on track, asking and answering the right questions.
B: Devon is working with our team; he's over in the States, I believe up in Portland. Zach is the one presenting the slides for me; he's usually down in Wellington, New Zealand. And Stephen is up in the North Island with me, in the beautiful Bay of Plenty. That's all due to the work of my team, and here's how you can get hold of us. Now that we've done our own Q&A session, I'd love to open up the floor for a Q&A with SIG Architecture.
D: Folks have seen pieces of this content; we had the conformance presentation at KubeCon EU, and Hippie presented some of the GitHub issues that they were planning and working against to the people in this meeting, though not these slides specifically. My understanding is this is what will be presented in Shanghai, yeah.
E: Yes, we saw pieces of it, and we're working through pieces of it currently. We have, as mentioned in the document I just entered... the note is that we're going to groom the backlog with fun, rainbows, and kids this afternoon, if you're so inclined to join, and we will walk through the gory details of what it would mean to implement or to promote some of those APIs that Hippie had denoted.
D: Yeah, I think the big questions we walked away with were around prioritizing behaviors, specifically which of these are most relevant within that context, and then, for some of the existing tests that exercise these endpoints, whether they do so in a direct manner or in a completely Rube Goldberg, accidental manner. And...
C: Also, a lot of these behaviors, at least the ones that I'm looking at, are optional behaviors, and so far we don't cover those in conformance. Like, I don't know that all providers support Gluster, and certainly not all support network policy; enforcement is optional. So we would need to go through and figure out what we want to do about those kinds of things. Storage in particular has a number of special issues that we need to figure out before we can cover most aspects of storage in conformance.
B: I noticed that, within storage in particular, the way the framework is written, if we were to use a different storage driver (it depends on which cloud you're running on), those tests automatically get tagged with a bunch of tags. It's the same set of tests run against different drivers, and the reason it's called Gluster in that list is because this test was run against Google Cloud. If we ran the same exact test suite against a different cloud provider, the test name would be different.
C: Right, I mean, we need to look at what storage behaviors are widely supported and abstract away the individual provider using the storage class or something like that. There was a proposal from SIG Storage a while back that we should go back and revisit, probably, if we want to address the storage ones.
F: Yeah, we went through a number of these things yesterday too, and in Barcelona, like John was mentioning. I think there's a substantial amount of work that can be done here to move things forward based on this data. Longer-run, I think we have some other tooling and things that we're working on, but that's where we are.
C: To go back to Aaron's point about Pod: the reason we were focusing on Pod is because that is the core of the system. It's also the case that the kubelet is highly pluggable, as CRI, CSI, and CNI show, and people are even swapping out the kubelet with alternate implementations. So in general with conformance, since there are so many untested areas, I felt it was important to focus on areas that are more likely to be non-conformant, which is areas that are highly pluggable and/or have proven alternate implementations in the ecosystem, like kube-proxy, or...
C: There are a couple of instances of the scheduler, but certainly kubelets and etcd are ones that are coming up more frequently these days; k3s used SQLite or something like that and passed all the conformance tests. So we need to really figure out what etcd-dependent behaviors in the API server we should be validating, as opposed to just looking at all the surface area. Yet there's some low-hanging fruit.
D: Yeah, I feel like this is why the suggestion would be to get this low-hanging fruit out of the way and, sort of in parallel, work with the framework that John has laid out in his KEP: could we create a machine-parsable list of behaviors that are expected, so we can have folks such as yourself, Brian, or more technically inclined folks, enumerate...
D
What
are
all
the
behaviors
that
be
care
to
actually
validate
and
separate
the
review
of
that
from
whether
or
not
tests
are
adequately
exercising
those
behaviors
so
that
we
can
then
start
to
check
off
the
list
like
whether
or
not
we've
done
that
the
behavior?
So
some
of
these
api's
can
be,
you
know,
included
in
that
list.
Yeah.
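(One hypothetical shape for such a machine-parsable behavior list, sketched in Go; every field name here is invented for illustration and is not taken from the actual behaviors KEP.)

```go
// Hypothetical sketch of a machine-parsable conformance behavior entry;
// all field names are invented for illustration.
package behaviors

// Behavior describes one testable behavior of an API object or field.
type Behavior struct {
	ID          string `json:"id"`          // e.g. "pod/lifecycle/termination-grace-period"
	APIObject   string `json:"apiObject"`   // e.g. "Pod"
	APIField    string `json:"apiField"`    // e.g. "spec.terminationGracePeriodSeconds"
	Description string `json:"description"` // human-readable statement of the behavior
	Required    bool   `json:"required"`    // false for optional behaviors such as NetworkPolicy
	Tested      bool   `json:"tested"`      // checked off once a conformance test covers it
}
```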
C: Just as one example of something that's maybe less obvious just by looking at the APIs that are called: do we have a test that checks whether a pod on one node can actually communicate with a pod on another node, which would verify that pod networking is working to some degree? I don't think we have an explicit test for that.
C: I know that we didn't. The requirement that you must have multiple nodes to pass conformance only recently came up, because of a DaemonSet in a different test. So now we test for that, but it did not ensure that the pods are on different nodes, because that was not required, and we had single-node providers pass conformance, okay.
D
And
so
I
think
you're
doing
a
good
job
of
like
enumerated
the
list
of
behaviors
that
we
can't
actually
enumerate
with
the
machine
that
looks
solely
at
the
API
or
the
fields.
You
know
the
API
endpoints
or
the
fields
with
the
me
endpoints.
Our
hope
is
like
we
could
start
to
proactively
generate
such
an
exhaustive
list,
but
we
also
really
will
need
human
curation
of
the
same
list
of
behaviors.
That
cannot
be
adequately
machine
generated,
and
so
it's
about
how
do
we?
E: You know, we've struggled as a community, for as long as I've been around here, to have the carrot-and-stick philosophy: what are the incentives, and what are the repercussions? A potential repercussion could be the denial of promotion for features from SIGs until certain things have been tested by said SIG. Well...
C
I
think
in
this,
in
this
particular
case,
and
yes,
we
could
use
something
like
that,
but
I
mean,
for
the
most
part,
we're
trying
to
distribute
Authority,
train,
API,
reviewers
and
all
the
individual
SIG's
and
things
like
that.
Actually,
you
just
think
a
coordinated
push
say:
look
our
goal
for
the
next
release
is
to
at
least
you
numerate,
those
core
behaviors.
Our
goal
for
the
next
couple
releases
after
that
are
to
make
sure
we
have
tests
for
some
fractions.
I,
actually
think
is
more
of
just
a
program
management
kind
of
gap.
C: Like, you know, I asked some folks in API Machinery for some watch tests; they agreed to the watch tests, and it just requires people to push and to drive it and to make it clear that it's important, and people will generally work on it. If that doesn't happen, then we can figure out what to do next, but without...
B: For the next questions, we were hoping it might help us to look at the parameters used, like iterating through the fields of Node, using some of the existing frameworks from community members that we have, or knowing which APIs and which parameters are being called, so we can start to record not just the parameters but the behaviors as well.
H: Would we be worried that there are diminishing returns the deeper we go? I mean, at some point we can identify the key parameters that we want to make sure we see variability in testing on, versus trying to do something magic. Things like type=LoadBalancer versus type=ClusterIP on Service, right: there are going to be a couple that are absolutely critical, that we want to see diversity on. But yeah, what do you think, Brian?
C: Yeah, so at the first level, we don't even know that we have tests that deliberately exercise many of the fields in the API, and in Pod spec in particular, at least for the ones that meet the requirements in terms of being OS-agnostic and things like that. Well, they're still going to be Linux for the most part, but in any case, yeah, we don't really have any sense of the ranges of values that are tested at the moment.
C: I would defer that for the most part for now, until we've actually made sure that we have tested post-start hooks and liveness probes and all of the testable feature surface of Pod, at least in some configuration, intentionally. Because of the way the tests are constructed, there are some common fixtures where it says, create me a pod, and there's a standard boilerplate pod, and many of the tests will actually validate all of the features that are configured in that pod.
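(The fixture pattern described here can be sketched roughly as follows; the helper name and defaults are hypothetical, not the real e2e framework code.)

```go
// Hypothetical sketch of a shared "boilerplate pod" fixture; not the
// actual e2e framework helper.
package fixtures

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BoilerplatePod returns a minimal pod. Tests built on it exercise only
// the fields set here; probes, lifecycle hooks, and security context are
// unset, so those code paths go unvalidated by default.
func BoilerplatePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
```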
D: Yeah, I was gonna say, I feel like we've gone far afield of the original topic, and most of this discussion would be more productively held in the conformance meeting or the prioritization meeting later today. That sounds good; this is a good check-in on the plans, but we've got other stuff on the agenda. Can...
A: Okay, we can move on to the next thing here, which is from somebody who couldn't make it today; it looks like a request, or a question, about integrating issue triage into the meetings. The text here says: there's been a small issue-triage call for a few weeks; it's at 7:30 a.m. Pacific time; the workload is now very low; maybe it'd be better served by five-to-fifteen-minute check-ins in more SIG meetings.
I: The meeting, as you know... yeah, it's been going okay. We've been going through the incoming issues and some older issues, bringing people in and trying to see what can be closed and what can be moved to the other subprojects, and things like that. So the incoming queue is not really huge; it's just that the backlog we have is quite large.
C: Right, like, if it's an issue that's a couple of years old, probably there are different people working in that area of the project now, and they are not subscribed to the issue. There's no manageable team-subscription mechanism in GitHub where we can curate the set of people who currently receive notifications; I think the teams are expanded at the point that they're added, or some weird thing. Anyway, it's totally broken.
C: It doesn't work. So I would say we need a different mechanism for getting attention to those older issues, either by a summary email to the relevant SIGs or something, but yeah, I'm not aware of a good management mechanism; everything we've tried in the past has more or less failed.
A: All right, that one was easy enough. The next one is actually a check-in on the CRD work. A while ago we had a number of CRD discussions, particularly around doing some of the stuff with core types, and it came up as an idea: do we do something, maybe even birds-of-a-feather-ish, where a group goes off and just syncs up and checks in on that work, so we can make sure that things are moving along, people are unblocked, things of that nature.
A: The motivation was, in part, that we've taken a number of actions as a group to send things off and get them done throughout these meetings, and quite often we don't come back to follow up, to see what happened with those actions and make sure they happened, things of that nature. I think it was Jaice who suggested this (Dims may be able to correct me), going and starting to follow up, and this was the item that he came up with.
J: I think SIG API Machinery had made some statements indicating that that was not going to be the case, but, not to speak for Jaice, I feel like Jaice was just wanting to see if there were any updates to when and how CRDs should be used, both in core and out of core, and maybe getting the SIG to share an update, I'd say.
C: If we had a volunteer to go to API Machinery to find out what the roadmap for CRD GA is, I think someone could just give a brief update to this group, or we could ask them to give an update. But last I heard, they were still trying to finalize the criteria for GA for CRDs, and obviously there has been work in progress on conversion, which is one of the clear requirements, but I'm not sure if they have a crisp set of criteria.
E: To give a brief update: with the proposal that we originally worked on ad hoc between myself, Mr. Hawkins to your right, Justin Santa Barbara, and other folks, we didn't like any of the proposals, pretty much, but add-ons was the one we disliked the least. The add-on subproject has been started, and they're starting with some core things.
C: Yeah, as I recall, that was one of the main blockers for a couple of the use cases; at least some had additional blockers, which I don't think are going to be addressed even in the GA of CRDs. But for controllers that just need something to be installed by default in every cluster, that was perceived as the main blocking issue for use within the project, other than GA of CRDs themselves. If...
[crosstalk]
J: What I would like, just... yeah, I'm sorry, just to refresh those on the call who might have been in the past discussion: when we talk about the pain points of installing a CRD, can we be maybe a little precise on what the pain is? Obviously CRDs are extremely popular in the wild today. Yes.
H: All right, so it seems like there are two issues here, though, just to be clear, because we talked about a couple of things. Number one is add-ons for CRDs; the other one was finer-grained scopes for CRDs, essentially sort of namespace-relative CRDs, as was mentioned earlier. These are separate, right? And...
A: There was a mailing-list thread on dealing with CRDs, especially when you get into the way some of the applications are using them. You might have two different people who are installing the same CRDs, and maybe different controllers working on the same CRD in two different namespaces that don't know about each other. We have that as a real-world problem in the wild today, and so how do you deal with things like updates, right, when you've got different controllers querying for different versions of different things?
I: Okay, I had three things that I wanted to touch base on. One was the random SHAs [in our dependencies]: we had a PR up for that that got merged. We still have about a dozen of them left, and we need to add a verify script to keep the number of dependencies to just those and make sure we don't add to them.
I: Then, we removed three cloud providers, ones that I think had been deprecated for the last two years or one and a half years, and that's done. And the third one, which I wanted to make sure that everybody here knows about, was that we have the end-to-end test and conformance testing teams thinking about how to keep import aliases in sync, and whether we can use the same import aliases across some part of the code base. So we added an update and a verify script for that.
I: So right now there is an update script. If people run the update script, there's a subdirectory under test/e2e which gets validated. There's a limited number of imports in a file, each with a specific alias, that we actually make sure are in sync, and there is a verify script, which is disabled, so it doesn't run when people want to add to it.
I: The reason for adding this was that there were at least two people working in these areas who came up with the same idea around the same time, saying it's hard to review stuff because the same alias can mean two different things in two different files. So that was the basic problem that we were trying to solve there. So, Tim...
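(A minimal sketch of what such an import-alias verify check could look like; the alias table and output format here are assumptions, and this is not the actual script under test/e2e.)

```go
// Hypothetical sketch of an import-alias verify check: parse Go files
// and flag aliases that disagree with an agreed-upon table.
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"os"
	"strings"
)

// wanted maps an import path to its canonical alias (illustrative table).
var wanted = map[string]string{
	"k8s.io/api/core/v1":                   "v1",
	"k8s.io/apimachinery/pkg/apis/meta/v1": "metav1",
}

func main() {
	fset := token.NewFileSet()
	exit := 0
	for _, file := range os.Args[1:] {
		f, err := parser.ParseFile(fset, file, nil, parser.ImportsOnly)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			exit = 1
			continue
		}
		for _, imp := range f.Imports {
			path := strings.Trim(imp.Path.Value, `"`)
			want, ok := wanted[path]
			if !ok || imp.Name == nil {
				continue // not a tracked import, or no explicit alias
			}
			if imp.Name.Name != want {
				fmt.Printf("%s: import %q aliased %q, want %q\n",
					file, path, imp.Name.Name, want)
				exit = 1
			}
		}
	}
	os.Exit(exit)
}
```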
G: All right, I definitely feel sympathy for people who are reviewing large changes where this was all over the place. It wasn't clear to me that it warranted this level of work, but the work is done now, so... well, no, the work is never done, because we maintain it forever; it's like any code, and it imposes a cost on others, on anyone using the code base.
D: Because we discovered that the RBAC API we were using in a number of conformance tests was actually the beta API and not the v1 API, and then, when Jordan opened a PR to migrate, he followed a convention that I noticed was in other places. And so then I thought, well, is that an actual convention, or is it just a personal thing? It looked like most of test/e2e was close to following that convention, though not all of it.
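(The kind of migration described is, roughly, switching the imported API group version; this is a hypothetical fragment, not the actual PR.)

```go
// Hypothetical fragment showing a migration from the beta RBAC API to v1.
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1" // was: rbacv1beta1 "k8s.io/api/rbac/v1beta1"
)

func main() {
	role := &rbacv1.Role{ // was: rbacv1beta1.Role
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list"},
		}},
	}
	fmt.Printf("%+v\n", role)
}
```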
G: No, what I'm... I mean, Clayton's just running an experiment. I'm fine with it, since it's done and I don't actually have to deal with it, so my personal investment is low, but I am fine with leaving it in the test/e2e space, where I do think consistency probably matters. I think the real value of this, though, would be in a golint check that would actually benefit the entire code base. We already have a bunch of things that fail golint; this would be one more reason to fail.
D: If y'all are interested in iterating on this, we can do that, but we did reach the conclusion that this is an experiment: a human who cares about being really fiddly-consistent can run this and then try to have fun landing their PR. So I have a 47-commit PR that touches a couple thousand lines and uses this tool to do that, and if it never gets in, I guess it never gets in, and that's my bad.
G: The people who are spending the most time looking at e2e might find it useful to occasionally run this and get back to convention, so I don't really have a problem with it. I would say that if the maintenance of this program itself becomes a problem, then we should reconsider. The meta question here, which I feel like we've danced around, is that we've never actually documented the code conventions for Kubernetes and its ecosystem projects in a consistent way.
G: When I review, I have a very opinionated stance that I don't always enforce on everyone, but sometimes I'll say, you know, you should go do this. Why? Because I told you so. That's not necessarily the best thing. I do think a lot of the people who are responding to this are people who probably should participate in trying to get a 20- or 30-line "here are our conventions" document.
D: I looked; I couldn't find a document that talked about conventions for imports. There's just one other quick thing: the last time this discussion happened, since you're talking about the broader ecosystem, people asked, why not goimports? The thing is, goimports is super opinionated and doesn't have settings; the alias you get is the last name of the package, that's it. We wondered, what if we modified that? But goimports turns out to be really slow, and this turns out to be really fast.
G: Just like we have problems with golint, this is the tension of: we have learned things over the years, and you're right, Aaron, that there are things we need that go just a little bit further. You're right that it should be documented in a place that anybody could find, and every reviewer should be able to point to it, and this discussion belongs as a PR to that document; it should exist before we have that PR. Like, I know.
E: I'll keep this short because we're running low on time. We did kind of a mini planning session for 1.16; this is the first time the conformance group actually asked, what are the priorities that we want to commit to for a given cycle? The priorities are listed there in the doc; the P0s are the things we think are the highest priority.
E: So this will help make our images more agnostic to platform, and there will be fewer of them, so that'll be good. There are some thorns in our side currently, though: there are some specialty images that exist, and I don't know if it's possible to boot that stuff out, but we'll try to address those and maybe come back to this group to talk about them. GPU is a thorn also. Last but not least, we're going to go through our existing backlog of current upgrade tests.
E: We want to promote tests from the existing in-tree suite to conformance, as well as evaluate the umbrella issues that Hippie has created for us. And last but not least, we want to start to create better criteria for evaluating priorities for the tests we want to promote; we've started a document on some of the things we're doing, to make it a little bit clearer for the future. So that's our current plan, as the people have agreed to it, so let it be written in stone.