From YouTube: 20200825 SIG Arch Conformance
A: Hello, everybody. This is Hippie Hacker with ii, and we are starting the conformance meeting for August the 25th, in most places. I will be your host today, and I see we have friends from all over the world, so welcome, welcome, welcome. Remember to abide by the CNCF code of conduct, and the Kubernetes code of conduct as well.
Discussion

A: All right, I am HH. I've got a few issues up here, Rhian's got a few, John's got an issue at the end, and if we have time, go ahead and add your issues to the bottom. This first issue is one that we've had open for a while: identifying features and API operations as optional.
A
We
specifically
are
looking
for
a
create
core
v1
namespace
service
account
token,
that
is
in
core,
but
we
need
to
kind
of
separately
decide
if
we
want
to
remove
that
from
conformance,
because
it's
currently
still
part
of
our
surface
area,
but
in
trying
to
understand
other
endpoints
as
well.
Knowing
if
there
are,
is
a
way
to
tie
specific
api
operations
and
kinds
that
are
part
of
feature
gates,
because,
obviously
to
be
part
of
conformance,
it's
not
something
that
we
should
be
able
to
feature
gate
on
and
off.
A: The manual process right now is inspecting the Kubernetes source code itself, looking at the comments in the source that get interpolated into the OpenAPI spec. The feature-gate information doesn't make it all the way into the OpenAPI as you'd expect. I don't think other operations and endpoints are usable in this case unless we have a metrics server, and it's unclear how we're supposed to test that. So we put that forth to SIG Arch a while back, and the answer was...
A: ...it's going to be done eventually, but not in a time frame useful to us. That was back in October, and then at the beginning of this year John removed the lifecycle-stale label. I saw it go stale again and he removed it again, and I was asking about any time plan or frame on this. I just want to make sure, before I stir this up too much, that this is something we do want to resurrect and keep at higher visibility, or whether we should go ahead and let it rot.
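The manual inspection described above can be partially scripted. Here is a minimal sketch that scans operation descriptions in an OpenAPI v2 document for wording that suggests gating or deprecation; the sample spec below is hand-made for illustration (a real one would come from the apiserver's /openapi/v2 endpoint), and the marker list is just a plausible starting point:

```python
def flag_suspect_operations(spec):
    """Scan each operation's description in an OpenAPI v2 spec for
    wording that suggests it is feature-gated or deprecated."""
    markers = ("alpha", "beta", "deprecated", "feature gate")
    flagged = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if not isinstance(op, dict):
                continue
            desc = op.get("description", "").lower()
            hits = [m for m in markers if m in desc]
            if hits:
                flagged.append((op.get("operationId", f"{method} {path}"), hits))
    return flagged

# Hypothetical sample resembling the real spec's shape, heavily abbreviated.
sample_spec = {
    "paths": {
        "/api/v1/namespaces/{namespace}/serviceaccounts/{name}/token": {
            "post": {
                "operationId": "createCoreV1NamespacedServiceAccountToken",
                "description": "create token of a ServiceAccount",
            }
        },
        "/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas": {
            "get": {
                "operationId": "listFlowcontrolApiserverV1alpha1FlowSchema",
                "description": "Alpha feature, gated by the APIPriorityAndFairness feature gate.",
            }
        },
    }
}

print(flag_suspect_operations(sample_spec))
```

This only catches gates that are mentioned in prose, which is exactly the limitation the discussion turns on: the gate metadata itself is not surfaced in the spec.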
C: We can bring it up again, but you know, if API Machinery has too many other balls in the air, that doesn't mean they're going to get to it any sooner. But certainly that's the way you make things happen: make them...
D: ...more visible.
C: You know, I guess the question is: is it blocking us on anything right now? Is it blocking, say, automated checks? Is that what the issue is?
A: It would be helpful to them. I think, for us, it's knowing whether we should be targeting an endpoint or not, and trying to get an authoritative answer in this case. For this particular endpoint, I think there's a separate issue on whether createCoreV1NamespacedServiceAccountToken should be part of the surface area that we want to cover for conformance, right.
A: Our mandate is to not include in conformance API operations that are optional, right? And for createCoreV1NamespacedServiceAccountToken, it's still unclear to me whether this should be removed. And why do you think it's optional?
A
I
think
when
we
looked
at
the
core
of
it,
is
it
tight
we're
trying
to
find
a
way?
Is
it
tied
to
a
feature
flag
or
not,
and
then
the
conversation
kind
of
went
well,
you
should
be
able
to
figure
that
out
at
runtime,
but
we're
trying
to
figure
it
out
as
that's
policy
right
now.
If
we
look
at
the
for
these
particular
fields,
I
think
and
as
we're
talking
about
security
context,
this
the
operation
is
service
account
token
in
our
er.
Here
I
think
we
don't
have
all
of
these.
A
The
end
point's
there
as
far
as
maybe
that
was
that
I
think,
can
we,
I
think
it
controls
the
presence
of
the
operation
or
the
endpoint.
A: I think our policy should probably be: what is the default if you don't change the feature sets, and don't intentionally disable things, particularly stuff in core. And I think that createCoreV1NamespacedServiceAccountToken, in this case, is enabled by default and wouldn't have a flag to turn it on.
E: I mean, it sounds like it's in the nice-to-have bucket. I do think it's worth asking again to see where that falls these days, because it's been about a year since you last asked. But I think of it in terms of how you can use "is the field deprecated or not" to sort of automatically generate exclusions of certain endpoints or fields.
E: Right, and so the thought being that for some of the things that are, you know, blocked by a feature gate or something, you could put some kind of reminder there, such that each release we could periodically revisit: hey, has this particular feature gone GA, or has this feature gate been removed? Or is there a KEP that we can link to, or something to remind us that this needs to be revisited?
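That per-release reminder could be as simple as tagging each exclusion with the feature gate it depends on, then diffing against the current gate list each release. A sketch of the idea; the gate and endpoint names below are invented, and the gate-stage metadata is assumed to be available from release tooling:

```python
def exclusions_to_revisit(exclusions, current_gates):
    """Return excluded endpoints whose feature gate has graduated to GA
    or disappeared entirely, meaning the exclusion needs a fresh look."""
    stale = []
    for endpoint, gate in exclusions.items():
        stage = current_gates.get(gate)  # None means the gate was removed
        if stage is None or stage == "GA":
            stale.append(endpoint)
    return sorted(stale)

# Hypothetical data: endpoint -> gate it was excluded for,
# and gate -> current stage as recorded in release metadata.
exclusions = {
    "listFlowcontrolApiserverV1alpha1FlowSchema": "APIPriorityAndFairness",
    "readSomeV1alpha1Widget": "WidgetFeature",
    "createSomeV1Gadget": "GadgetFeature",
}
current_gates = {
    "APIPriorityAndFairness": "Beta",
    # "WidgetFeature" has graduated; "GadgetFeature" was removed entirely.
    "WidgetFeature": "GA",
}

print(exclusions_to_revisit(exclusions, current_gates))
```

Endpoints whose gate is still alpha or beta stay quietly excluded; the rest surface for review each release.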
A: Annotating it within the APISnoop repo, for our decision to exclude it, is where we're doing it now, and the reason is usually in the link here. And this is... I don't know if Zach's on the call. Yeah, he's not here today, but here...
A: Yeah, feature flags that enable and disable operations.
E: So I'm just suggesting, since it sounds like you maybe did some archaeology on this a year ago, but we didn't write down what the decision was and why it was made that way at the time, it would be useful to. If we can't get this annotated in the API itself, because that remains, you know, not a priority and not something you have the resources to address yourself, then yeah, finding a way to annotate it with actionable issues that can be revisited sounds like the appropriate alternative.
A
I
our
research
prior
was
fields
like
these
are
the
fields
that
were
required,
deprecated
or
there's
the
features.
So
let's
go
back
through
okay,
let's,
let's
revisit
that
and
do
some
dive
current
research
on
tying
together
when
we
hit
those
and
where
to
annotate
the
the
optional
because
of
a
feature
flag
making
it
optional.
A
Yes,
all
right!
Thank
you
for
capturing
that
rion,
the
I
think
that's!
Is
there
anything
else
on
that
particular
issue,
I
will
go
ahead
and
open
the
next.
The
next
meta
issue,
this
one
is
our
log
for
detecting
flaky
test
root.
Cause
stephen
are
you
on?
This
is
what
if
this
is
your
work,
that
I
can
yeah
go.
F: For it, yeah. So the problem was that, if you go to the PR history, there are three flakes. I had asked for a review of what was happening, and tried to use the Prow job's resources as a guide to actually create a separate cluster for going through a number of test runs. When I did 50 consecutive test runs, the slowest time for the whole test was only just under 6.8 seconds, and then a 100-run...
F: So I'm just trying to get a bit more guidance on whether or not that's stuff that's been covered. The SIG Testing meeting this morning will give me some extra clues, but I'm trying to do as much prep as possible before pushing stuff through as PR changes and then trying to re-run it; that was just causing some issues.
F: Yeah, so basically I've got a separate fork of the job, and, if you scroll down a little bit further, okay, there are some extra timing components that I put into this particular job so that I could see the timing for each stage of the test, as well as what was going through, to try and find any outliers. Across the 150 test runs there are basically no outliers at all that I'm seeing, and that's on a cluster that only had four gigs of RAM.
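A simple way to automate the outlier check across repeated runs like these is to flag anything more than a few standard deviations above the mean. A minimal sketch; the run durations below are invented, loosely matching the roughly 6.8-second figure mentioned:

```python
from statistics import mean, stdev

def find_outliers(durations, sigmas=3.0):
    """Flag runs whose duration is more than `sigmas` standard
    deviations above the mean of all runs."""
    mu = mean(durations)
    sd = stdev(durations)
    return [(i, d) for i, d in enumerate(durations) if d > mu + sigmas * sd]

# 50 invented run times clustered just under 6.6 s, plus one slow run.
runs = [6.5 + 0.005 * (i % 7) for i in range(50)] + [9.2]
print(find_outliers(runs))
```

With a tight cluster like the real data described here, even one slow run stands out; with no slow runs, the list comes back empty, matching the "no outliers" observation.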
E: Right, so I feel like one of the takeaways from the SIG Testing session this morning might be that running the test in isolation is one thing, but the next step would be to figure out how to run the test alongside other tests, or find some way to simulate timings getting thrown off by concurrency. And it may be a little difficult here to verify whether it is, you know, the...
F
I
did
try
and
run
the
stress
test
actually
on
the
vm
at
the
same
time,
on
some
other
options,
and
even
when
I
was
doing
that,
I
didn't
see
any
real
outliers,
but
it
was
separate
from
yeah.
I
understand
the
point
of
trying
to
run
some
extra
load
on
the
actual
cluster
itself
to
simulate
other
test
runs
so.
C
Is
some
sort
of
race
condition
in
the
code
that
sets
all
that
up
and
that's
why
we
saw
it
a
few
times
under
heavy
load
when
lots
and
lots
of
things
are
happening,
but
if
you're,
just
repeating
serially
or
you're,
not
exercising
the
same
segment
of
code
concurrently
in
a
lot
of
different
crazy
ways:
you're
not
gonna,
see
it.
C
Okay,
it's
true
flake,
but
it's
just
and
not
even
necessarily
this
code.
I
mean
that
would
be
my.
That
would
be
my
first
guess
anyway.
F
Okay,
I'm
just
going
back
to
the
prayer
history
happy
looking
at
the
last
two
like
green
runs
for
the
last
set
of
commands.
I
noticed
also
that
those
timings
for
some
of
those
jobs
being
run
through
it's
it
lines
up
with
a
lot
of
the
ci
changes
that,
like
just
the
timing,
runs
just
drop
drastically
compared
to
the
timing,
runs
and
the
historical
jobs
so
I'll
see.
F
If
I
can
look
over
the
video
and
from
the
meeting
this
morning
and
get
a
few
more
ideas
before
I
send
back
some
comments
back
to
like
a,
I
think,
at
the
stage.
E
I
think
that's
fair,
I
think
I
agree
with
john.
It
could
be
that
you're
running
into
like
an
actual
legitimate
underlying
bug,
and
so
I
think
that
would
be
worth
pointing
out
and
I
think
you
know
throw
throw
a
couple:
explicit
tests,
pull
kubernetes
kinds
and
see
if
it
sort
of
continues
to
behave
the
same
way
or,
if
you're
seeing
flakes.
A: Cool, thanks for the feedback, and thanks also to SIG Testing. We'll look at the video from this morning; that was at five o'clock our local time, so we're going to review it later in the day, whenever it gets uploaded.
A: By the way, in probably three months, whenever everything switches over, it will actually be at 7am our local time, which we can probably attend, whenever the flippy bits happen.
G: Now, that endpoint is one of the pieces of technical debt that came in, I think in 1.17, and I looked around for information on it. If I go into the specification, it says that the FlowSchema is actually an alpha endpoint.
A: Okay, then we will submit another PR to APISnoop to update that list of endpoints to include that. I just wanted Rhian to go through the full process of identification and bringing it to the group. Okay, yeah, for sure, there's a learning process.
E
Right,
I
think
you
know
just
tying
back
to
what
we
were
talking
about
earlier.
It
would
be
great
to
have
this
going
through
an
actual
issue
that
represents
the
feature
being
in
alpha
status,
so
you
can
be
sure
to
like
track.
Hey
it's
not
an
outfit.
We
should
do
something.
A
In
this
list
here
do
we
want
it
like?
I
think,
right
now,
these
link
to
specific
commits
four
things
where
it's
an
alpha
feature.
Do
we
want
to
link
to
an
issue
instead,
because
this
is
these
are
just
links.
C: ...enough, which I don't know that it is. In theory, you can. Right now we've got tooling, and we have metadata in a file now that says what stage it's in and when it's expected to graduate to the next stage, and that sort of thing. So we could actually have a job that runs through and says: find everything that says it's GA. And I mean, you already have this job, right? You're talking about having this job, but you might be able to put some tooling around it.
A
Worth
it,
we
should
create
a
different
issue
just
to
do
a
quick
query
to
see
if
we
can
quickly
see
the
dependence
of
the
missing
endpoints
to
see
if
they
depend
on
alpha
parameters
or
kinds
like
the.
If
the,
if
the
api
that
is
in
core
the
ga,
if
it
calls
the
cons.
C
If
the
feature
gain
is
on
right
and-
and
we
typically
don't
do
that-
where
we
would
add
a
required
field,
because
if
we
do
that,
it
can
cause
problems
with
upgrade
downgrade
cycles
and
things
like
that,
typically,
they
don't
get
added
as
required.
Unless
it's
like
a
new
kind
right,
you
can
have
recorded
fields
in
a
new
kind,
but
I
think.
A
For
api
server,
api
group.
A
That's
all
good
rhian.
Can
I
get
a
confirmation
that
this
is
is
not
get
api
server.
Api
group
like
that,
doesn't
look
like
a
like
that
full
because
there's
the
the
get
api
group
group
encore
and
anytime
there's
a
new
api
group.
You
can
get
the
the
one
underneath
that
and
that
is
available
for
any
time.
There's
a
new
api
group
so,
whether
they're
an
alpha
or
not.
G
It
correct
be
if
you
want
to
double
check
on
the
actual
endpoint,
it's
an
api
snoop
under
the
technical
date
for
117..
Okay,
we
want
to
quickly
review.
A
It
is
okay.
That
is
the
full.
I
think
there
is
always
going
to
be
a
get
something
api
server
api
group
for
any
group
that
gets
created
so
anytime,
there's
a
new
api
group.
As
soon
as
it
hits
alpha,
we
will
get
a
new
core,
endpoint
called
get
that
new
group
api
server,
api
group
and-
and
just
like
john,
was
saying-
we
just
need
to
put
that
into
the
list
of
not
yet
and
link
to
the
the
issue
for
tracking
that.
A: This is just... it is in the code. Somebody promoted it and it just automatically gets created; it's part of the process. So these two things were tied together for that reason, and now we know super clearly why this one shouldn't be there and that we need to link it together. We have no way of automating this process, because the metadata is not there within the OpenAPI.
A: ...that we use to generate the OpenAPI, and it's not. It is literally bringing up the API server, querying it, and saying "what's on the menu today," and then publishing that as our de facto interface, which is what we base our conformance tests on.
C
Okay,
so
let's
do
a
little
more
research
into
that,
and
definitely
if
this
group
is
just
an
alpha
group,
then
you
should
be
able
to
look
at
the
groups
and
see
what
kinds
are
available
based
upon
for
specifically
for
api
groups
right.
So
if
there's
only
an
alpha
available,
v1,
alpha,
1
or
or
v1
beta
1
or
whatever
it
is,
there's
no
there's!
No
without
there's
no
v1
or
v2
available.
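That check can be scripted against discovery data. A minimal sketch, assuming the group-to-served-versions mapping has already been fetched (on a live cluster this comes from the /apis discovery endpoint; the data below is invented):

```python
import re

def alpha_only_groups(groups):
    """Return API groups that serve no stable version (v1, v2, ...),
    i.e. groups that exist only at alpha/beta maturity."""
    stable = re.compile(r"^v\d+$")  # matches v1, v2, ... but not v1alpha1 or v1beta1
    return sorted(
        name for name, versions in groups.items()
        if not any(stable.match(v) for v in versions)
    )

# Invented discovery data: group name -> served versions.
groups = {
    "apps": ["v1"],
    "flowcontrol.apiserver.k8s.io": ["v1alpha1"],
    "storage.k8s.io": ["v1", "v1beta1"],
}
print(alpha_only_groups(groups))
```

A group flagged by this check (like the FlowSchema one discussed above) would go onto the "not yet eligible" list rather than into conformance.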
C
So,
okay,
well
yeah!
You
can
look
at
the
all
the
the
kinds,
the
the
groups
and
kinds
that
are
available
and,
and
the
version
is
in
there
what
group
version
and
kind
right
you
can.
You
can
get
that
and
if
you
see
that
this.
A
Okay,
we
will
update
our
logic
before
okay,
well,
yeah,
we'll
get
into
that,
so
that
will
clear
up
all
the
way
back
to
117.
yay
and
then
I
think
we're
actually
looking
at
these
two
end
points
for
116.,
and
if
we
do
these
two
because,
like
discovery,
api
group,
hey
it's
already
tested,
there's
really
only
one
other
one,
and
that
was
the
other
one
we
were
talking
about
was
create
core
v1
namespace
service
account
token.
A
If
we
can
decide
on
this
one.
That
means
all
the
way
back
to.
When
we
started
writing
tests
we've
had,
we
will
have
zero
technical
debts
since
we
started
writing
tests
where
I
started
writing
tests,
so
I
would
love
to
know
because
we've
got
the
other
one
figured
out
this.
This
create
core
v1
namespace
service
count
token.
Can
we
write
a
test
board
or
can
we
get
rid
of
it?.
C
That's
I
think
you
can
yeah
yeah,
I
I
I
don't
know
offhand.
It
sounds
like
a
something
that
is
probably
required.
They
probably
should
be
tested,
but
I
I
have
to
look
at
more
detail.
A
We'll
do
a
little
more
research
on
that.
I
feel
the
same
way
john.
It
was
just
back
when
we
were.
I
think
this
was
like
this
is
when
we
were
looking
to
see
around
parameters.
C: Sure. So I went through the recording from the last meeting, when I wasn't here and we talked about the document I put together. The main thing I got out of the recording was a topic I didn't put in the document, and which was orthogonal to it: the process of how we add new groups of tests, how we reduce technical debt by adding conformance tests for chunks of functionality, or possibly optional functionality, and how we validate that with vendors and give them time to adopt the functionality and make sure they can pass it before we make it officially conformance. And that's great, and I can add that to the doc.
C
But
there
was
a
I,
I
kind
of
honestly
got
the
feeling.
Nobody
read
the
doc,
because
there
was
a
discussion
of
that.
I
was
proposing
something
and
this
document
doesn't
propose
anything.
It
raises
a
bunch
of
questions
about
the
things
that
I've
heard
raised
and
concerns
and
decisions
and
things
that
we
need
to
address
before
we
can
really
make
progress
on
the
profiles.
C
The
thing
I
did
propose
was
what
generated
the
doc
in
the
first
place,
which
was
those
privileged
and
based,
and
I
think
that's
maybe
what
people
were
talking
about
as
the
proposal,
but
the
point
of
that
really
was
to
engender
some
discussion
about
like
one
of
the
things
I
think
aaron,
you
brought
up
that
that
sorry.
This
is
actually
I've
jumped
to
the
second
one,
sorry,
but
we
can
change
the
notes
later.
C
You
brought
up
like
you're,
coming
from
the
user
perspective
right,
brad's
coming
from
the
vendor
perspective,
and
so
those
two
different
perspectives.
I
talk
a
little
bit
about
that
document.
D: Maybe it was the perspective or something.
C: There's actually... that's a fourth perspective, but it's kind of aligned with it. The first one was community and CNCF: the community wants to avoid fragmentation and wants to allow people to write tooling and applications against a consistent set of APIs, so that kind of is covered in there, right. That's also the tool vendors. So then there are the users, who want to know that stuff can run, right. They want to know that stuff can run.
C: The point is, I really want us as a group to go through this and discuss some of these things. So maybe what I need to do is shoot out a note and ask people to take a look at it, because we need to decide... what happens, or what I believe is happening, is we're picking...
E: ...a statement of the problem. Yeah, I apologize if I didn't accurately characterize it.
E
Because
yeah,
it
seemed
like
a
lot.
More
of
the
discussion
was
not
necessarily
grounded
in
the
dock,
but
I
think
I
think
that's
a
good
framing
of
the
problem
like
is
it
possible
for
us
to
meet
the
needs
of
all
stakeholders,
and,
if
not,
can
we
agree
on
a
common
set
of
priorities
for
these
stakeholders,
given
that
we're
all
from
we're
all
coming
from
different
positions
here.
C
Yeah,
so
I
guess
I
I
I'll
send
out
a
note
and
we'll
just
try
to
hopefully
get
people
to
take
another
look
at
the
document
and
revisit
this
in
the
next
meeting.
C
So
then
the
other
topic.
I
want
to
talk
about
something
else
that
some
that
tippy
you
mentioned
in
the
meeting,
which
I
actually
thought
was
an
awesome
idea
and
it
just
sort
of
like
went
by
with
passing
and
people
nodding
their
heads,
but
I
think
we
actually
need
to
do
something
about
it.
This
was
my
other
topic
so,
and
that
is
you
mentioned
hey.
I
think
somebody
asked
about
the
vendors.
I
remember
who
it
was
and
said:
hey
what
about
those
tool,
vendors
or
those
people?
C
It
was
walter
like
what
about
the
people
delivering
these
workloads
and
they
have
this
vested
interest
of
you
know
they
want
their
workload
to
work
on
all
the
different
vent,
all
the
kubernetes
providers,
and
how
can
they
validate
that
and
hippie?
You
mentioned
that
api
snoop,
set
up
with
the
proper
audit
configuration,
can
identify
all
the
endpoints
that
are
hit
by
a
given
workload,
and
I
thought
that's
really
nice
and
could
help
us
get
some
of
these
vendors
to
potentially
contribute.
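The core of that APISnoop idea is reading apiserver audit events and collecting the distinct operations a workload touched. A rough sketch over audit-log JSON lines; the two events below are hand-made and much abbreviated compared to real audit records, which carry many more fields:

```python
import json

def endpoints_hit(audit_lines):
    """Collect distinct (verb, requestURI) pairs from apiserver
    audit-log events, one JSON object per line."""
    hit = set()
    for line in audit_lines:
        event = json.loads(line)
        hit.add((event["verb"], event["requestURI"]))
    return sorted(hit)

# Abbreviated, hand-made audit events for illustration.
log = [
    '{"verb": "create", "requestURI": "/api/v1/namespaces/demo/pods"}',
    '{"verb": "list", "requestURI": "/api/v1/namespaces/demo/pods"}',
    '{"verb": "create", "requestURI": "/api/v1/namespaces/demo/pods"}',
]
print(endpoints_hit(log))
```

Diffing a set like this against the list of conformance-tested endpoints is what would produce the "APIs you use that aren't covered" report described next.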
C: If you just push these two buttons here, it spins up a cluster with this configuration, ideally doing everything for you, and it'll spit out a report: here are the APIs that you're using that are not currently part of conformance, and you can help out by going to the conformance meeting, talking about these, and identifying which ones should be part of conformance.
C
You
know
working
with
the
team
there
and
writing
some
tests
right
so,
like
I
think
that's
actually
could
we
could
because
we're
aligning
with
their
interests.
We
could
actually
pull
people
in,
but
we
have
to
make
it
easy
for
them,
because
the
process
for
not
right
now
is
non-trivial
right.
It's
it's
not
a.
You
have
to
get
steeped
in
the
conformance
world
to
be
able
to
do
it
and
that's
going
to
be
too
too
high
of
a
bar.
A: You have to provide the extra parameters and config file for the auditing to be enabled, and for the statically defined endpoints that will be available when the cluster comes up to be defined, and then we deploy APISnoop inside of it. We've had to do that this release in order for us to write any more tests, because we can't work from master, and so we're also working to get to where Minikube can be brought up with a build from master, because currently it only supports releases that are published.
A
Those
images
and
we've
already
got
that
working
with
kind
we're
just
documenting
it,
and
so
we
should
be
pretty
close
to
just
saying
here,
run
these
few
commands
on
any
computer
to
bring
up
kind
or
mini
cube
until
the
cluster
is
up
and
then
run
your
tests
and
then
run
this
command
to
output.
A
Your
because
we
have
a
web
ui
like
you
just
kind
of
if
you
want
to
you,
can
browse
it
that
way
or
just
dump
it
out,
because
it's
a
postgresql
database
that
you
we
use,
org
files
to
dynamically,
just
define
queries
hit,
comma,
comma
on
it
and
it
outputs
the
list
which
we
could
write.
You
know
turn
into
here
just
run
this
container
and
outputs.
The
report
right.
A: If we had a... for what we want, I guess we just look at the kind clusters we're already running for conformance and PR blocking, or we create another one that is for...
C
Well,
part
of
this
is
discovery
around
profiles
too
right.
So
so
things
that
aren't
necessarily,
I
don't
know
how
those
clusters
are
set
up
today
or
the
details.
So
maybe
we
do
already
set
up
some
sort
of
storage,
persistent
storage.
C
Maybe
we
do
set
up
some
kind
of
load
balancer,
although
I
don't
know
how
you
do
that
with
kind.
Maybe
we
don't
need
that
because
that
one
at
least
is
very,
very
clear
but
other
optional
things.
You
know
you
need
our
back,
enable.
I
bet
you
99
of
the
vendor
package.
Solutions
out
there
all
require
are
back
right
so
that
that's
an
argument
for
you
know
some
sort
of
solution
that
includes
some
sort
of
profile
that
includes
our
pack.
A
No
thank
you
for
capturing
that
I
I've
been
to
be
honest.
We
pushed
for
that
all
the
way
back
in
the
beginning.
To
try
to
do.
I
think
initially
talked
about
certifying
vendors
to
where
their
applications
only
use
stable
apis
and
getting
the
certified
kubernetes
stable
as
icon
right.
A
We're
not
far
from
this.
Let
us
do
some
exploration
with
caleb's
work,
okay,
and
what
I'd
love
to
do
is
put
together
a
blog
post
with
the
underlying
tooling
that
we
have
so
that
other
people
can
come
along
and
run
their
information.
It's
trying
to
figure
out
where
we
collect
the
info,
and
I
think
for
now
it's
it
may
be
just
run
this
and
because
you
don't
submit
it
to
kate's
conformance
the
repo
cncfk's
conformance,
it
would
be
jump
into
the
channel
and.
A: We do... we get all the layers of things, and the problem that we had, not being the vendors and the tooling folks, is that we didn't know how to fully exercise their tool to make it do its thing. Sure, yeah, a chart, yeah.
A
We
did
a
chart
okay
now
what
well
you
actually
have
to
do
all
of
this
and
trying
to
get,
and
that
would
help
everyone
with
ci
if
we
could
find
a
way
to
take
their
product
and
not
only
deploy
it
but
have
a
a
standardized
set
of
of
exercising
all
of
the
important
things
that
it's
supposed
to
do.
There's
room
for
probably
hosting
that
stuff
on
well.
C
So
cncf
did
a
for
cncf
projects
did
a
a
big
ci
thing
at
one
point:
crosscloud
ci,
I
don't
know
the
status
of
that,
but
this
is
coming
from
my
core
dns
experience
like
the
problem
with
it
was
it
literally
didn't
even
run
a
single
accordion
sci
test
it
just
like
checked
if
the
pond
was
running
or
something
that
just
deployed
it
and
and
that
wasn't
that
useful
right
and
what
it's
so
I
I
could
see
yeah
space
in
cncf
to
come
up
with
some
conventions
that
if
you
follow
these
conventions-
and
you
have
say
a
repo
of
tests
and
maybe
a
helm
chart
or
whatever
it
is
you
it
will
deploy
it
to
their
ci
it'll,
execute
those
tests
and
it'll
produce
a
report.
C
Right
I
mean
if
projects
could
opt
into
that
by
following
those
conventions
you
you
could
will
be
very
useful
for
the
projects
for
sure
and
it
could
potentially
be
useful
for
us
if
they're
all
running
on
kubernetes.
A
Yes,
I
am
one
of
the
things
that
I've
been
working
on
in
parallel
with
this,
and
it
was
due
to
us
having
the
cncf
kate's
conformance
repo
we've
set
up
prow.cncf.io,
and
we
have
donations
from
multiple
of
resources
from
aws
and
packet.
And
of
course,
google
has
primarily
donated
their
resources
to
the
kubernetes
project,
but
it
would
be
lovely
to
use
those
resources
to
run
a
prowl
cncf
that
we
have
this.
A: ...you know, a kubeconfig file and a namespace, right, and just say: go to town. I mean, it all comes down to how you make sure people don't clobber each other, because if they're like, "well, I need 17 namespaces in order for my system to work," well, you know. But for a lot of projects, just a kubeconfig and a namespace, and they can execute and see how they perform.
A
The
proud
job
itself
would,
I
mean,
don't
they
normally
limit
themselves
to
a
namespace
or
we
could.
I
know
that
with
aws,
for
example,
they've
got
talk,
far
gate
or
something
like
that.
That
would
kind
of
isolate
and
allow
for
specific
billing
for
those
type
of
jobs
or
we,
I
could
have
another,
that's
kind
of
on
the
side
challenge
as
well.
A
I
went
and
getting
arm
support
for
a
lot
of
projects
as
well
being
able
to
not
only
have
builds,
but
also
to
test
those
builds
on
some
arm
architecture,
which
we
could
right.
Think
packet
would
be
helpful
for
that.
I
would
love
any
more
information
or
thoughts
on
this.
I'm
still
trying
to
coalesce
this
into
something
that
I
can
present
to
the
multiple
vendors
and
to
priyanka
and
I'll
I'll
try
to
engage
in
that
a
little
more
deeply
in
the
coming
couple
of
weeks.
A
Sure,
okay,
thank
you
for
noticing
john
yeah
sure.
A: And I think we'll give that back. We have lots of tests that are ongoing; nothing we need feedback on right now.
A: One thing I did note while we were looking, and it was not obvious to me initially, is that in our list of ineligible endpoints there's proxy stuff, and I remember in our last meeting we said we are going to test that. So let's go ahead at some point and remove that. As an action item for ii: look at iterating through those redirects as part of our deliverables.
A: We're going to get our numbers this quarter; this release we'll do what we can, but I definitely want to see these off of the ineligible list and on to tested. And Stephen, I'll leave it to you to put in those PRs, if that's right. Thank you, everybody, we'll see you in...