From YouTube: SIG Architecture Meeting 20180510
A: All right, welcome everybody. This is the last time I'll host, probably, for these meetings unless I have to, but today is Thursday, May 10th, 2018. This is SIG Architecture, hosted by myself, Jaice Singer DuMars, and co-chair Brian Grant, and there is a meeting agenda and notes available at bit.ly/sig-architecture if you're following along after the fact.
B: And Tim St. Clair, are you here? Yep, I've got the agenda up on my screen. Can we switch back to the video? Great. So the conversation started with this KEP from the CoreDNS folks, who want to switch the default DNS server in Kubernetes to be CoreDNS, which I think everybody is supportive of. The question was around: where do we pull that binary container image from? Is it from a GCR that the CoreDNS team owns, or is it from a GCR or a Docker registry that the Kubernetes core team owns? And that turned into a conversation: what guarantees are we making to people who are consuming the Kubernetes release artifacts, and what principles are we using to decide whether this is okay or not?
B: Do we really need a fork in there? Does that make sense — keeping our own copy of it from which we do our build? This seems like a ton of work, and a very slippery slope into a lot more work than even that. This is where Tim St. Clair and I have rubbed against each other a little bit; he's got some thoughts about it, clearly from his experiences with OpenStack — OpenShift, sorry, it's early. And so we had a little bit of discussion on this in steering, and we agreed to take it to SIG Arch.
C: It's like the last option — a nuclear option that we should only go towards because, as you mentioned earlier and as I've mentioned, it's a very slippery slope. Once you get to the point where you're supporting some level of patches on your own that haven't been negotiated with the upstream source, it takes a long time, and sometimes those patches snowball. Right now I don't think it'll be that bad for CoreDNS, but I think the precedent can snowball, right?
B: I would argue that if we do this for CoreDNS, we absolutely should do it for etcd, because we're asking users to depend on a particular binary artifact in many cases, and we should be in control of that. The fear that I have with doing it sort of last-second — you know, Kobayashi Maru, I like the analogy — is that we don't have any process in place for doing it. We don't know how to do it. We don't know how the build works. We don't know how to test it.
C: A ton of work. And I think we're already overstretched as a community. If the CNCF wants to build test apparatus to allow us to do these things, I think that seems like a reasonable approach, but as a community, I don't see how we can. We can't even publish our own artifacts with some level of provenance — we don't sign our binaries, right? It scares me to think what taking on that extra responsibility would mean. So we don't have that many.
B: Correct. So I would guess it's probably less than five things that fall into this category, but they're not small. Yeah, etcd is not a small project. It's not our project, but it's kind of within the community, sure, and it's within the CNCF family, so we have some amount of influence anyway.
E: Can I jump in and ask a couple of questions, just real quick, to clarify? When it comes to something like CoreDNS specifically, is there something that's come up that made us think there's going to be a security issue that we can't handle by working with them, that would make us want to add this extra process? No.
D: I mean, it has come up at other times. There was a security issue with a component that was in the project but not even in the release bundle, right? So then we had to scramble and figure out how to deal with that. So there is reason for this precedent. Yeah, there's precedent — it's not as if it's never happened.
E: And you know, it's very easy, once we start vendoring something, to just say: well, we'll just throw this one patch on — we want it sooner than they're going to get it in, we want this other little thing here — and the next thing you know we're carrying a fork, and then bringing in upstream changes becomes a real pain. I know — I've actually done this before. It can be a real pain and a fair amount of work, and we're already overextended. And so I guess my argument would be:
E: we should wait to start doing this until we actually have problems. Now, I think the security case is a good one: how do we tackle it? Make sure they have a good security process. They actually do have a little bit documented on security — I haven't poked at it at all for CoreDNS, but we should probably make sure, if it's part of the CNCF family, that they've got a good process where we can work with them.
E: Then we don't necessarily have to carry that, because those folks are willing to carry their own burden. It might be something different with etcd, because that's an entirely separate project, which I wouldn't mind being managed differently — maybe under a SIG or something, but that's a different story. So maybe there's a way we could work with them to improve their process to where we would be more comfortable with it, and it would be a contribution to them to make their stuff better as well.
F: For example, one of the important things that came out of that exercise was getting a process for backporting changes from master back to the version that we depended on, and that helped a lot. So having a formal process around backporting improvements, and around which versions will be supported in the future, seems like an important one.
F: The other topic I wanted to bring up was just the transitive dependencies of these other projects that we depend on. It's unclear what percentage of the surface area, or the exposure, is addressed by vendoring in just the first hop. etcd depends on other open-source projects — I'm sure CoreDNS does as well — and that, I think, is unknown at this point. But how many layers deep in the onion do you go is kind of the sticky part.
B: There's binary dependencies and there's code dependencies, right, and I'm mostly concerned with our ability to build the binary artifact. So whatever etcd needs to build its binary artifact would have to be simultaneously vendored in the same way, at the same tags, such that we had reproducible builds, if we were to go down this road.
E: Yeah, well, the tools do — the expectation is that you set an alias to the other location that you're going to pull it from. The tools do support it. It's not a plus-patch model; it's supporting a fork using the same path alias, so to speak. The tools do support that, and both of these tools we're talking about use dep, which handles the entire tree that's in use — not the entire tree that could be specified by packages, but whatever imports are actually specified. That's all recorded; that's all maintained, as far as the whole transitive tree.
E: It's all documented there, at the revisions and everything. If you go look at the Gopkg.lock in these, it'll tell you every single transitive dependency, at every single revision, that's used for the whole tree for this thing and whatever release it is. And there is a way to alias in. So if you want to say "I want to fork it," then in your Gopkg.toml you can say: use this alias instead. You can pull it from a separate repo and then use that instead. Okay, so this is at least doable.
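The aliasing E describes can be sketched with dep's manifest syntax. This is a minimal illustration, not a recommendation, and the fork repository name below is hypothetical:

```toml
# Gopkg.toml — the source code keeps importing the upstream path,
# but dep fetches the dependency from a fork via a source override.
[[constraint]]
  name = "github.com/coredns/coredns"                      # import path stays the same
  source = "https://github.com/example-org/coredns-fork"   # hypothetical fork to pull from
  version = "1.1.0"
```

With an override like this in place, Gopkg.lock still records the full transitive tree at pinned revisions, which is what makes the reproducible-build concern above at least tractable.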
B: What is the follow-up, and who's the owner? I'm blocking the KEP, so I assume that it's somewhat my problem. I don't know what the follow-up is. Is the sense here that we don't care enough to go this far, or that we think it's not that big a problem? Sorry, I didn't mean to denigrate the decision.
D: But one of them is, you know, a security-related benchmark, and we could add — I don't know that the security processes of CoreDNS are covered by that benchmark, but that is something we could all look at. CoreDNS is not a graduated project, but sure, if that were a requirement for graduated projects, then they would probably prefer to implement it.
B
You
know
rather
know
okay
and
so
we're
saying
like
because
they're
in
the
CN
CF
sphere
that
we're
okay
with
not
having
a
copy
of
their
code
because
I'm
trusted
it
will
get
obliterated.
Yeah.
Okay,
I
will
I
will
do
the
audit
to
see
what
other
binaries
I
can
figure
out,
maybe
fall
in
the
same
category
and
maybe
we'll
have
to
revisit
if
there's
something
else,
I'm
gonna
SNES
outside
that
strictly
speaking,
so
we
should
decide.
Then,
by
what
principle
do
we
allow
at
CD
2
all
the
same
vendor?
C: Yeah, I don't know where it was decided. There's no KEP on how these are being managed. I was just tracking my issues for the 1.11 release, to try and make sure my PRs for SIG Cluster Lifecycle were being triaged properly, and I happened upon a couple of PRs from API Machinery to feature branches, and a monstrosity of a logistical nightmare.
D: So, a couple of things. One is that for the bulk of the discussion, it would be useful to actually have Daniel come, I think, so we can actually discuss it — both because he's actually doing the feature branch, and because he's done most of the other major surgery with how our code is organized in terms of staging and the little repos and things like that, right? He's basically driving it, I guess.
D: I can answer that. This has been discussed many times, including at the Leadership Summit last summer, so before KEPs even existed. And what I had asked for all of those times, including at the Leadership Summit, was for someone to actually try it, so we could actually go figure out what the problems were — there were problems with running tests on the branches, and there were problems with the OWNERS files — you know, just to actually do an experiment to surface problems.
D: So I didn't actually know that we were really going to do it with apply, but every release I'd hoped that someone was going to try it, and I think Daniel, being familiar with the woes of the other approaches that we've tried, is in a pretty good position to actually try and contain the damage. We can have him come present at SIG Architecture to explain what he's doing. Okay, go ahead, yeah.
D: There is a tracking issue in the community repo for feature branches. Maybe what we should do is post back the issues that are discovered there, so there's a single trail of, you know, "oh, we actually need to mark these things, especially with labels, so that people are aware that it's not the master branch," or —
D: That's much worse, in my opinion, because it's much harder for them to get the benefits of our automation, like the testing and the CLA bots, and, you know, to have the work be visible to other community members, where people can do normal review and comments and things like that. I don't want it to be completely dark. Well —
B: You're right — we don't get any automation, we don't get the CLA bot, we don't get the test infrastructure. So my question is whether we should make that easier — just more plumbed, or possible, to do in a personal fork — or whether we should continue. My big fear is that somebody somewhere fat-fingers it and merges to the master branch when they didn't mean to.
E: You know, there's another angle to this too, with feature branches: if they're long-running — especially on a fast-moving project — they can be really hard to eventually merge back in. If it's in kubernetes/kubernetes or something we own, then we own that pain. But if somebody else has their own personal fork and we expect them to do a PR as always, then they have the pain of bringing it back, especially if they decide to have a long-running feature branch.
D: So I know that people really like to live at head on master, and that is definitely how we do things inside Google, and there are benefits. But it requires a super high level of discipline and very high levels of test coverage, and we don't have that, and we also don't have frequent releases. So we're in the situation, every release, where there's stuff in the release that probably shouldn't be — that isn't properly flag-gated, is not adequately tested, and doesn't have documentation — and then every release is a fire drill.
B: And just by way of analogy, if you look at how the Linux kernel operates, nobody pushes half-baked features into the upstream kernel. They all operate on their own individual branches, and honestly, the code base has evolved to make it easier for some systemic patches to exist, right? And I think —
D: Refactoring could be a whitelisted activity, I think, depending on what they're doing. But anyway, if we really want to discuss this in detail, we should probably ask Daniel to come. I didn't notice this on the agenda until this morning. It would be interesting to hear their current set of woes — how is it actually going?
B: The fact that they can do sort of giant scary PRs, with confidence that nobody else needs to care about them, is kind of powerful. You shouldn't be looking at those PRs unless you're involved in that development process, because at the end of it, hopefully, that PR stream will be broken down and reviewed.
I: Sure, I'll just give the very brief introduction: we rolled out the Certified Kubernetes program in September, and we were aware of the fact that the actual coverage of the conformance tests in Kubernetes is not great — in the range of 15 to 20 percent. So we looked at funding for improving those tests, and the governing board had the request that it be a one-time process and not an indefinite liability that they were signing up for.
I: And so we worked with SIG Architecture, I guess about four months ago or so, to have a change saying that before features can make it to stable, conformance tests must be included, and that's occurred. And so there is this large amount of technical debt, and the idea here is that the CNCF is funding an external test development company to work on that and try and dig us out of it, and Mithra has generously volunteered time to help supervise them.
F: What tests do you wish you had that you don't? Not that there would be some external vendor writing all of the conformance tests unchecked. The other thing I want to mention is that I did meet with a couple of those folks, and suggested that lots of people are paid by lots of companies to work on Kubernetes full time, and they are no exception — they are simply joining the community.
D: I think, as Dan mentioned, they may be getting started more slowly than expected. I think there were impressions that more has happened than actually has, or that more complex things were being developed, because there were a bunch of ideas discussed in the past about building some kind of proxy to measure detailed coverage information, or some kind of automated framework for doing this systematically.
F: Sorry to jump in — there are a couple of other efforts. You'll see one about API surface coverage. This vendor is not the right group of people to do that; they're new to Kubernetes, they don't have the context, and they're not the right people to be building test infrastructure, in my opinion, just seeing what they've been working on. So they would simply be fixing the flakiness of existing tests that we would like to be conformance tests, or writing a few tests that don't exist that we wish did exist.
F: I think on the test infrastructure — the other group Dan has engaged with, Hippie Hacker, is here to talk about test coverage via audit logging, which is pretty promising and looks really cool; you'll see that in a minute — but I think there may be a mis-set expectation about the role of this vendor.
C: Yeah, I think the conversation we had in the Conformance Working Group and whatnot was that for the new PRs, before they were added, we would make sure that there's a test plan that's okayed by SIG Arch, because there were a couple of PRs that came to SIG Testing where Aaron and I were like, "has this been approved? Is this okay?" We will happily review those PRs so long as SIG Arch says yes.
F: One clarification I would like to make: one of the PRs is about API Machinery, and it's simply an e2e test against API Machinery, and the API Machinery folks are now in that PR. I think there was a routing problem — it went to SIG Testing first without any context — but now that it has been assigned to folks in API Machinery, I'm unclear why SIG Architecture cares about e2e tests that are being written to improve the test coverage of API Machinery at this stage.
F: There's a working group for conformance, which is a little strange — the mailing list is different. It has been run more within the CNCF than in the Kubernetes community, although it is linked from the SIGs page as a working group there. So if you go to the list of SIGs, the mailing list and Slack channel and so on are listed on that page.
D: Yeah, so I think going forward, clarifying the process and getting some documents written down and checked in, so that everybody can discover them and see what's going on, would be helpful. And then, if there are a lot of tests in flight, or tests that have been identified as needed, just have a single place where those can be tracked, so that people can see what the status is.
H: A quick status update on what's been going on: we have a spreadsheet which has all the different APIs and which of them have coverage in terms of conformance and e2e tests. What we are looking at currently is identifying which are the top APIs that we want to focus on and add first, and once — Hippie Hacker talks about a tool — we are assuming that we can get some insights into the top
H: APIs from his results. And once we have that, then we definitely want to come to the SIGs themselves and ask: are these the right ones that we should be focusing on? If so, then what are the gaps — whatever scenarios that should be covered as e2e tests? We would definitely work with those SIGs to get guidance on that before working with the vendor company to add those cases. The next idea after that is to actually turn to SIG Arch to see if these could be turned into conformance cases.
D: Yeah, my guidance, which I know was copied into one of the docs, was that we need to focus on parts of the system that are pluggable, or that we expect to be reimplemented, because those are the areas of most concern for conformance and fragmentation. If people are just running Kubernetes, then it's going to be the same as everybody else's Kubernetes.
D: But if parts get replaced, or things are pluggable — like a container runtime, or Virtual Kubelet, networking, storage, things like that — then that's where we need conformance tests to make sure that those implementations actually match the behavior of the system. And there have been things that have been identified — you know, etcd is not explicitly replaceable, but there have been experiments and PRs from people to replace it with Consul and DynamoDB, and things like that.
D: So just as one example, in API machinery there are core behaviors that are heavily dependent on etcd behaviors, like optimistic concurrency and watch. We should prioritize those areas for conformance tests. So SIG API Machinery is working on a watch test, and I gave some feedback to help make sure that the right things are actually tested there, to test the multi-version behavior.
D: Existing e2e tests were converted to conformance — that did happen, and they go through SIG Architecture and the API approvers. The other big batch of tests that we're looking at — but don't exactly know what's going to happen with — are the node conformance tests. The naming is a little bit confusing, but the node conformance tests are where a lot of the node-level, pod-level functionality is tested.
D: So we're working on figuring out what to do about that. But the pod functionality in particular — because everything in the node is pluggable, basically — is really critical for us to have good test coverage on, and we just do not. We don't have any tests that directly test pod functionality in the conformance suite right now. It's indirectly tested by some of the tests, but it's not directly, deliberately tested.
J: Brian, Tim — so how do you prioritize cloud provider stuff? Because we have a huge ecosystem here of so many cloud providers, and we don't have a way to, you know, black-box verify that the providers we are pulling out from in-tree to external are actually working and doing the things that they are supposed to do.
B
I
think
on
this
topic,
there's
some
confusion
about
what
the
spec
is
versus,
what
the
implementations
do
and
I've
been
working
with
Walter
on
this
serve
on
going
to
tease
apart
the
controllers
that
are
mixing
up
both
sort
of
expected
to
gratis
parts
and
the
provider
specific
parts.
They
should
probably
in
almost
those
cases,
these
separate
controllers,
to
the
extent
that
I'm
not
sure
how
much
of
a
conformance
spec
we
can
apply
to
cloud
providers
above
and
beyond
the
generic
functionality.
Like
does
your
networking
work
like
do
services
work?
Yes,
they
work
cool.
B: Then your cloud provider is conformant. And there's always going to be cloud-provider-specific things that are going to be really impossible to test, because, you know, they're specific to the way Google operates or the way Amazon operates. I would love, if you have specifics, to go through a list of what you think the behaviors we might want to specify are, and start to tease those apart. I have one.
C: I want to say that this conversation is great, and it's the reason why I brought it up in steering, so that we could have it here. The whole purpose behind me raising the issue is that there's a lot of information here that's not being relayed to a lot of the other people who need to be aware of it and its implications. That's what will happen now, so thanks. That's all. Okay.
I: That would use a lot of the machinery that we've created for Certified Kubernetes, and we could certify a set of Kubernetes apps to say that, for example, Istio or Skaffold or Draft or many of these other tools built on top require Kubernetes 1.9, and that there would be some expected guarantee that if you have a certified Kubernetes 1.9, you would then be able to run that tool on top of it. And I will say, this is super, super new work.
I: There's a lot of balls up in the air — even this definition: is it useful to think about Kubernetes API consumers, which we call "capex," as opposed to, say, a MySQL-under-WordPress implementation of a Helm chart that doesn't exercise the API in the same way? So with that, let me hand it off to Hippie Hacker, unless anyone wanted to interject first. Yeah.
D: I just have a quick comment, which is that a lot of these applications, like MySQL or whatever, have operators which do interact with the API surface and need to have a service account with certain RBAC permissions, and things like that. So actually, some similar work had been done scraping audit logs in order to automatically generate RBAC rules for applications.
D: So it sounds like there's just some similarity there. But yeah, don't assume that just because it's an application that runs in a container, it doesn't need access to the API server, because in practice, in order to automate some of the application lifecycle, people are finding that applications do — or at least, you know, a sidecar or an operator or something does.
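The audit-log-to-RBAC idea D mentions can be sketched roughly as follows. This is a minimal illustration, not the actual tooling discussed: the field names follow the shape of Kubernetes audit events (`verb`, `objectRef.resource`, `objectRef.apiGroup`), and `rbac_rules_from_audit` is a hypothetical helper name.

```python
import json
from collections import defaultdict


def rbac_rules_from_audit(audit_lines):
    """Aggregate Kubernetes audit-log events (one JSON object per line)
    into coarse RBAC-style rules: (apiGroup, resource) -> sorted verbs."""
    rules = defaultdict(set)
    for line in audit_lines:
        event = json.loads(line)
        ref = event.get("objectRef") or {}
        resource = ref.get("resource")
        if not resource:
            continue  # skip non-resource URLs such as /healthz
        group = ref.get("apiGroup", "")  # "" denotes the core API group
        rules[(group, resource)].add(event.get("verb", ""))
    return {key: sorted(verbs) for key, verbs in rules.items()}
```

Observing an application's actual API calls this way yields a least-privilege starting point for its service account, rather than guessing at permissions up front.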
I: Yeah, I think the piece I'm still a little unclear on is that I just don't think we wanted to be in the business of saying, "oh, this is a Kubernetes-compatible container." That would sort of go against the whole concept that if it runs in a container, it should run in Kubernetes. And so what we were trying to get at is: is there some terminology for containers that exercise the API, but —
F: So it is using the audit logs to describe what APIs are called during some operations. I think they went through the process of following multiple Helm charts and seeing what API groups were called, and then used the d3 library to visualize it in those concentric doughnuts, colored for GA, beta, and alpha, and which API endpoints are called and so on. From my perspective, it's the most promising demonstration, or visualization, of what API groups and endpoints and verbs are used.
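The "grey, untested" view described here can be approximated with a simple set comparison. This is a sketch under the assumption that the stable endpoints and the audit-observed endpoints have already been collected as strings; the endpoint naming scheme below is invented for illustration.

```python
def coverage_report(stable_endpoints, observed_endpoints):
    """Compare the set of stable API endpoints against those actually
    observed in audit logs; the untested remainder is the 'grey' ring."""
    tested = stable_endpoints & observed_endpoints
    untested = stable_endpoints - observed_endpoints
    percent = (100.0 * len(tested) / len(stable_endpoints)
               if stable_endpoints else 0.0)
    return {
        "tested": sorted(tested),
        "untested": sorted(untested),
        "percent": percent,
    }
```

Run per application, this produces the usage profile described below for individual tools like Draft or Sonobuoy; run over the union of all applications, it gives the overall coverage picture.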
K: This is the starburst chart from Rowan's and my work. There's a link down there; you can go in and interact with it. The outer grey parts are what's untested, and the main parts in the middle are alpha, beta, and stable, and that gives you an idea of which part — this is the overall view; we also have this per application.
K: So there, if you mouse over, you can kind of see it at the top level. There's a link — there's the spreadsheet. The spreadsheet is automatically generated from the API repository using that data; I may need to grant read/write access, but you can filter based on conformance, and it generates the list of APIs that are stable, that there are no conformance tests for, and that a number of the applications we tested are using.
K: The lists are primarily the Helm charts that use RBAC, so that we knew these were Kubernetes API consumers. And within each of those, if you want to pull up, say, Draft or Sonobuoy, we can see what they're used to do. This is the set of endpoints that are hit from the Sonobuoy tests. It doesn't show what's untested; it shows what's actually hit, so this would be like a profile for a particular application.
K: Beyond the conformance for a particular application — to be Kubernetes 1.9-stable, or 1.10 for the conformance tests — there's also an idea: we gather all the logs and are able to see, for a particular API endpoint, all of the applications using that endpoint, and the parameters and workflow. So for a user story, we could cover a flow.
K: We could possibly generate a set of data for the person writing that test, to see how the different applications are using that API from real data, not from guessing what the API might be doing. And if we do that across, let's say, forty to sixty applications, that tester is going to have a really good idea of how to write a meaningful test.
K: So those are, I guess, the three things: prioritizing which APIs are used heavily throughout our community; being able to have a certification program in the future that says these "capex" are stable or compliant for these releases; and then generating a good starting place for our testers to write tests, or possibly doing some automation based on that.
K: Let's go up one slide — I want to go through why the data's flaky. Okay, so right now in the tests we're just bringing up the Helm charts, and I don't think we're driving the charts heavily enough. We don't really have a lot of Helm charts that have helm test support. I think we need to prioritize a list of applications that do use the Kubernetes API — these "capex" — so that we can drive them hard to see what APIs they're using.
F: The discussion about the certification or evaluation of API consumers is a second issue for me that I think we need more conversation on — whether this relates to the conformance program. I think it helps identify what APIs are actually being exercised during conformance tests at a more granular level, and helps with the prioritization effort. That's where I think it helps the most.
I: I guess I would make the request, if we could, to move this to the SIG Architecture mailing list. I'd really like feedback from this group as to where Chris and Rowan should focus next. I think these are really promising initial results, but what would you like to see for this tool to be useful on an ongoing basis?
D: Yeah, great. Well, thanks for coming. I think that was helpful for everybody, to see what's actually happening and the progress that's being made. I think the follow-up discussion will be useful. Also, the API visualization, I think, presents in very clear terms how poor the coverage actually is — most of the stable APIs are mostly gray, yeah.
D: You know, getting to the deeper level of making sure that we have tests that actually test functionality that's relevant — it's great to hear that you're thinking about that, as opposed to just superficially hitting all the endpoints. Yeah, there are certain APIs, like pods, that are extremely feature-rich. We could say, "well, we exercised pods," but actually pods have like 70 features or something like that, right? So we'll need to get down to that next level of detail, and actually —
D
Of
the
API
are
of
other
api's
are
exercised
through
pods
down
through
the
API
directly
write
the
secrets
to
config
maps
and
volumes
of
other
types.
Things
like
that,
okay,
so
we'll
want
to
think
about
how
we
can
make
sure
that
we
have
coverage
of
those
things
I
think
to
some
degree.
You
know
this
idea
of
exercising
applications
is
interesting.
We
also
have
some
tutorials
and
things
like
that.
So
maybe
you
know
we
think
about
some
of
the
main
user
journeys.
D
You
can
make
sure
that
you
know
those
kinds
of
normal
things
that
users
with
you
would
get
tested
beyond
in
terms
of
near-term
priorities.
Going
back
to
the
thing
I
mentioned
earlier,
it's
making
sure
we
have
adequate
testing.
Other
things
that
we
believe
will
get.
Reimplemented
still
seems
like
the
right
area
of
focus
yeah
that
one
in
the
box
functionality
pod
is
one
of
the
things
that
would
be
really
because
the
container
runtime
is
portable.
Networking
is
portable
storage
is
pluggable.
There's
a
virtual
keyboard
thing,
that's
happening
so,
just
basically
everything
about
related
and
secrets.