From YouTube: 20200630 SIG Arch Conformance
Description: GMT20200630 210144 SIG Arch C 1920x1080
A: Good morning. Today is Wednesday, or Tuesday; it's the first here in New Zealand, and this is the SIG Architecture Conformance office hours meeting. I am Hippie Hacker, your host, and this meeting is being recorded live. We have a code of conduct that we all adhere to. Rather than go through the whole agenda, we'll do some quick shout-outs: to Aaron, thank you so much for caring about this enough to put a PR forth; looking forward to going through that with you. And to Liggett, who is constantly adding new endpoints with conformance tests; this is exactly how it should go and I'm so excited. Then we'll scroll down to the bottom real quickly to cover an item here, future behavior-based conformance changes. John, you're on.

B: Thank you very much for letting me jump the line there. So Aaron and Jeffrey and I were talking last week, and I wanted to see what everybody thinks. We feel like the work that we've been doing towards the behavior-based stuff, and that whole KEP, is really not progressing in the way that we want, for a couple of reasons. The original goals of that were around separating out the set of reviewers, having a punch list, and a few other things, and the problems it was trying to solve don't seem to necessarily be the problems we're having in velocity. And as we go to actually take the existing tests and try to map them into that model, there's an incredible amount of technical debt and work that would need to be done in order to make that work. To use a cliche, it just doesn't seem like the juice is worth the squeeze on that one. So we're thinking we should abandon that KEP, and instead we'll keep the conformance.yaml that we have but pull the behaviors pieces out. We also have a follow-on KEP that was based on that work, for profiles, but we talked about some alternative ways we could do profiles that were simpler and wouldn't have that dependency on that work, and Aaron's going to talk some more about those later. So I wanted to raise that here, get people's opinions, and then we can decide what to do with it.
C: Yep, yeah. So one thought I had, for something we could maybe try going forward if we get there: we wanted to be able to use behaviors to specify what functionality we knew we wanted to cover but hadn't yet been covered, and I think we could use the Pending verb from the Ginkgo DSL for that. It's a test that's marked with a slightly different status, and that would allow us to know that those are tests we have agreed we want implemented, but that haven't been implemented yet. It's just a thought; there's no plan for that going forward at the moment, but it's an option if we find we're at a place where the API-based coverage you're using to guide which tests to write next isn't working anymore.
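As a rough sketch of what that could look like (the suite bootstrap, the spec names, and the idea of keeping a [Conformance] marker on a pending spec are all assumptions, not an agreed convention), Ginkgo's `PIt` marks a spec as pending so it is reported but never run:

```go
// Hypothetical example of a "pending conformance test": PIt registers the
// spec as Pending, so the suite reports it without running it.
package conformance

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// Standard Ginkgo bootstrap so this file runs under `go test`.
func TestPendingConformance(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "Pending conformance sketch")
}

var _ = ginkgo.Describe("[sig-example] Widgets", func() {
	// The name still carries the [Conformance] string so tooling could tell
	// "pending conformance tests" apart from ordinary pending tests.
	ginkgo.PIt("should be retrievable by name [Conformance]", func() {
		// Intentionally empty: agreed-upon coverage that has not been written yet.
	})
})
```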
B: Yeah, I think if we do that, we'll need some conventions around it: they're pending conformance tests, not just pending tests, so there's some indication that these are important functions we think should be part of the conformance suite, even if we haven't written them yet. I mean, we've been doing that with issues, right? Hippie, your team has been tracking those with issues, and that's another way too; it's just not in the tree.

B: So I guess, if there are no objections, then what I'll do when I get back next week is put together a PR that abandons that KEP, and then we'll just let people object on there, which is unlikely to happen if nobody here is objecting.

B: All right, that's all I've got. Aaron's got control for the profile stuff later. Thank you all, and I will see you all next time. All right, thank you.

A: Since you're on the call, Aaron, thank you very much for the direct guidance and the very clear PR; we feel the love on that one, and it's really appreciated. For our open discussion section now, we've got some different points. Zach is going to take this first item, about a PR open to k/k; I'm going to open up that link and yield the floor.
D: Hello, hello, hi. Yeah, so this is, I guess, just a prompt for discussion. The idea is to take the output from APISnoop and convert it into a JSON or a YAML that we can put into the conformance test data. It would show the endpoints according to the most recent OpenAPI spec, whether or not they're tested, and whether or not they're conformance tested. This would allow a couple of things, mainly ways to see coverage using endpoints as a metric without having to use the APISnoop web app.

D: You can just do simple jq scripts to see what the current coverage would be. It would also mean that different apps, whether it's APISnoop or a Prow bot or what have you, would be able to compare coverage without having to have their own database or processing power. They would just be taking a look at the JSON and comparing it to the latest results, or taking a look at the JSON throughout its commit history to see progress over time, and so on.

D: And yeah, this is the example of it. I can go over how it's generated, but basically we combine the OpenAPI spec with all the audit events from a test run, load it into a Postgres database, and then, for every endpoint in the spec, see whether or not a user agent hits it where the user agent has the e2e string in it, and whether or not it has a conformance tag in it.
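A simplified sketch of that classification step, assuming made-up field names rather than APISnoop's actual schema: an endpoint counts as tested when an audit event's user agent carries the e2e marker, and as conformance-tested when the test name in the user agent also carries the [Conformance] tag.

```go
// Hedged sketch of classifying an audit event the way Zach describes.
// Field names and the user-agent format are illustrative assumptions.
package main

import (
	"fmt"
	"strings"
)

type auditEvent struct {
	OperationID string // endpoint hit, per the OpenAPI spec
	UserAgent   string // e.g. "e2e.test/v1.19.0 -- [sig-node] Pods ... [Conformance]"
}

// classify reports whether the event came from an e2e test at all, and
// whether that test is tagged as a conformance test.
func classify(ev auditEvent) (tested, conformanceTested bool) {
	tested = strings.Contains(ev.UserAgent, "e2e")
	conformanceTested = tested && strings.Contains(ev.UserAgent, "[Conformance]")
	return
}

func main() {
	ev := auditEvent{
		OperationID: "readCoreV1NamespacedPod",
		UserAgent:   "e2e.test/v1.19.0 -- [sig-node] Pods should be submitted and removed [Conformance]",
	}
	t, c := classify(ev)
	fmt.Println(t, c) // true true
}
```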
C: Okay, so I feel like Lubomir's comment on the PR was around: what would keeping this file up to date look like?

D: We'd have an action set up so that, for every commit or for specifically tagged commits, we would run the Postgres job, which would simply create a database, run the processing, output the JSON or YAML, and commit that to this folder.
A: This is just an example to start a discussion. At a minimum we'd have the endpoints as a hash, the key being the endpoint itself or the operation ID, and the values being the list of tests that hit it; that could probably reduce the size of it. I noticed at a glance it's about 1.9 MB.
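A minimal sketch of consuming a file shaped that way, assuming a hypothetical conformance-coverage.yaml keyed by operation ID with the list of hitting tests as values:

```go
// Hedged sketch: read a hypothetical coverage file and list operation IDs
// that no test currently hits. The file name and layout are assumptions.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

func main() {
	raw, err := os.ReadFile("conformance-coverage.yaml")
	if err != nil {
		panic(err)
	}
	// operationID -> tests that hit the endpoint
	coverage := map[string][]string{}
	if err := yaml.Unmarshal(raw, &coverage); err != nil {
		panic(err)
	}
	for op, tests := range coverage {
		if len(tests) == 0 {
			fmt.Println("untested:", op)
		}
	}
}
```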
A
We
don't
need
to
replicate
necessarily
the
structure,
complete
structure
of
open
api
json
because
it's
already
there,
but
we
do
kind
of
want
to
know
that
very
clearly
at
the
time
of
the
pr
that
any
that
this
endpoint
that
there's
a
diff,
it's
really
clear
that
this
endpoint
in
this
particular
pr
is
hitting
a
new
that
this
test
is
hitting
a
new
endpoint
and
ideally
right
now.
The
way
that
zach
has
this
set
up
is
it's
sort
of
inside
a
container
by
itself
that
uses
postgres.
A
So
it's
a
container,
it
would
be
part
of
a
proud
job
that
might
run
as
a
comment
that
produces
a
pr
against
their
own
branch
of
their
own
work.
Saying
just
a
suggestion
that
now
that
this
is,
I
don't
know
exactly
where
it
fits
in
the
process
when,
when
they
don't
run
the
tool
itself
as
part
of
their
pr
when
they
adjust
tests.
So
ideally,
if
you're
writing
a
test
or
you're
promoting
endpoints
or
changing
that,
you
would
run
this
this
tool
to
update
this
conformance
coverage
file.
C: The hesitance I have is, and I can talk more about this when I talk about profiles, I'm trying to see if there's a way for us to maybe get data like this out of tree instead of sticking it in tree. That's the first point. The second point is, this looks an awful lot like we're starting to gate on API coverage, whereas John just discussed how we think we're not interested in, you know, doing the behavior-based approach.

C: Although I can really appreciate the trust-but-verify aspect of allowing a tool to verify this for us, I'm not sure it's worth the cost of putting it in tree.

C: So, rather than gating on it too tightly, I think it would be useful to have this as another consumable artifact from APISnoop, whether that is, you know, hitting APISnoop's API or looking at git; maybe it's stored in APISnoop's git repo or something, so we can sort of periodically compare that against where we are.
A: Thanks, Aaron. Yeah, so that will kind of flow into the next point, which is also around Zach's work on APISnoop itself, the page. And Zach, apologies that this isn't the most recent version; this is the link I have available, so tell me if I need something else.

D: Yeah, no worries. What we're seeing here looks largely the same, but this is pulling from that JSON that we just showed. This is actually taken from our own repo, so not from kubernetes, but it's using the same output file. The sunburst is working the same; the largest difference is that the routes are now based off of the release, instead of being built around buckets and jobs as they used to be.

D: That made it hard to know the exact place you're trying to go to; it's much simpler to just type in 1.18 or 1.19. We also now have multiple pages, specifically the conformance progress page, which shows the stable endpoints and their conformance coverage over time, with a couple of different graphs to help illustrate that. It would also include documentation and an about page, which would give better info around where this data is coming from, what we mean by stable endpoints and eligible endpoints, why we're tracking this, etc.
C: Yeah, this looks great. There is a part of me that feels like it sure would be great if we had something more granular than by release, so that we could track our progress a little more tightly. I know we're kind of in a situation right now where the progress is slow enough that there's not a ton of value in checking in on it every week, and if we accept that and just say checking in every release is when it's worth it, I get that. But I still think it's sometimes helpful to understand: these release cycles take, you know, the one we're in now is taking maybe up to four months, so it'd be useful to have some sense, while we're in the middle of a three-to-four-month cycle, of whether or not we are headed in the right direction.
A: Aaron, I want to make sure I understand you on that. Conformance and the tests themselves are cut at the beginning of each x.y.0 release, and it's my understanding that they don't change after that, particularly the conformance tests.

A: And if you look at what's happened in the current release, it's the stuff between when 1.18.0 was cut and the progress we're making on 1.19.0, which is what's pictured here. That big swath of light green is new and tested; yay, we're having new endpoints come in with conformance tests, and I'm super happy about that. But the green rise above, on the left, from the prior release, is the work that is new.

A: It's the debt getting paid off, and I don't know that more granularity would be beneficial, because once we've gotten here, it's all focused on the current release. Do you mean to see, over the course of time, between 1.17 and 1.18?
C: I don't know; I think it's more that 1.19 hasn't been cut yet, so maybe... I think you're right. I think probably just viewing it by release is good enough.
A: The interesting data points here, I think, are those PRs that touch swagger.json, the ones that touch GA, and the PRs that touch any tests that affect our coverage on those. That's where the gate is; that's what we're trying to do with the tool we have. It doesn't necessarily need to be a hard gate, but having it suggest: "Hi, I noticed you added some alpha; here's the process for going through that."

A: Just so you know, when it's automated: "here's the progress you're going to need, no stress." For beta, the same comment: "Hey, I've noticed you've got some new beta stuff; here's the process." "Hey, I've noticed you went into alpha; we're not going to deny this, but just so you know, here's the process that your approvers and reviewers are going to be looking for in order for this to be approved." And it's there in the PR, as a gentle, straightforward reminder that this shouldn't just pass.
C: I hear that. I feel like, again, that's something I would expect to be communicated through the KEP review process, but yeah, I agree: this is a great way of showing the progress that we have made while adding conformance tests and promoting conformance tests.
D: So you can see that, at the point at which it became a commitment in 1.16 that new endpoints come in with conformance tests, things got much better. Before then, and this is also why you see the dark green "tested" increasing faster than just the light green, a lot of the tests are being written for endpoints that were released several versions ago, because they came in without tests. So that should ideally decrease; we'll be able to catch up on a kind of static number, because all new endpoints are coming in with tests, ideally, yeah.

D: I think this looks great. One thing we could do, for the sunburst when switching between releases: these are just static graphs, but the sunburst is the one where you can look specifically at 1.18 or 1.19, etc., and that is one where we could have, as one of the releases available, something like 1.19-alpha or 1.19-beta.

D: The problem I was facing was having too much noise, like if we updated per commit, or for every test run, and then you're choosing between buckets and jobs, and it was hard to get meaning from it easily. But knowing the cutoff points that someone would want to look at when looking at past releases, I would be keen to know.
C: So, if I care to compare where we're at now versus where we were when whatever the last alpha or beta release candidate was cut, that's evaluating it more frequently. But I think just continually refreshing where we are now in 1.19 and comparing that against where we were when 1.18 was cut is probably proof enough for us to know that things are going up, and that they're going up by so much in comparison to the other release.
A: Zach, you added another graph to that tracker; what's that one?

D: The second graph, oh yeah. The second graph shows the endpoints released in each version and the number of them that are still untested. The red is the still-untested. Before 1.16, before we were able to see endpoint coverage and before we had the commitment that new endpoints come in with conformance tests, you'll see that a number of them are still untested, compared to 1.19, where there are no untested new endpoints, which is great, yeah.
C: Yeah, I think this is great. I mean, for me this naturally leads to asking: okay, so where's the list of those endpoints? Let's sort them by release, so we hit the most recent ones first, because that seems pretty egregious, and work our way back; or maybe we want to work from the earliest, I don't know. But this is cool.
D: Sweet, yeah. And this is the sort of ongoing work where we are continually reducing the maintenance size of that Postgres run we're doing, so getting the list of still-untested endpoints sorted by release is much, much simpler now; it's just a basic SQL query, right.
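For illustration, a hedged sketch of what that query might look like when driven from Go; the connection string, table, and column names are assumptions, not APISnoop's actual schema:

```go
// Hedged sketch: list stable endpoints with no e2e hits, newest release first.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/apisnoop?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	rows, err := db.Query(`
		SELECT operation_id, release
		FROM endpoint_coverage
		WHERE level = 'stable' AND test_hits = 0
		ORDER BY release DESC`)
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var op, release string
		if err := rows.Scan(&op, &release); err != nil {
			panic(err)
		}
		fmt.Printf("%s\t%s\n", release, op)
	}
}
```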
A: Cool. Let me reduce this down to just being that container. That means anybody can pretty much just run that container, bring it up, and have that database and the query, or in this case just the output YAML that is used to make these websites. And that's where it'll be interesting to see how we can bring that into some tooling out of tree, or not.

A: One thing that I want to ask, I guess of Aaron and Serena while you're here: are we happy with this new approach compared to what our website is now? So, when we're confident in those numbers, that they're matching up and everything, should we go ahead and flip that over to be the new site, or is there any more feedback that you want to put into it before we use that as an update? My hope, and we'll have to see if we can make sure that the numbers are all adding up, is that we can do that sometime in the next week or so, hopefully before the meeting, but I don't want to promise that without being really confident in it. As a change in how this works, do you like what you see enough for us to flip the old one off and make this the new way?

A: The old one required us to actually generate the data for all of these runs, and it took a while, and that approach showed what endpoints were there at the time of the release, whereas our new approach only looks at endpoints that still exist today, not ones that were deprecated but were still there in those earlier releases.
C: I haven't thought through that deeply, but again, for me, what you just showed me is far more actionable than what we have right now, in terms of helping us understand what debt we should be paying down.
A: Two topics. Rion, I can take notes for you during this one, but you had some release metadata and conformance tests, and we have another gate that we're creating for the CNCF k8s-conformance repo, so I'll let you speak to this, Brianna.

E: Okay. If you look at the information in this PR, you will see there are several endpoints that do not have the metadata that actually gives the release, that is, which release the test corresponds to. So the idea is that we need to update this data using the gate. We did throw it up in the channel, and we did suggest that the value that would be generated would be the release as of when the test went into conformance.
A
We
have
another
issue
that
we're
working
on
for
the
cncf
case
performance
repo
and
we
need
to
know
the
exact
list
of
tests
that
we
need
to
ensure
are
in
the
submitted,
runs
for
sonovoid,
and
this
came
up
that
we
don't
have
that
list
in
a
form
that's
easily
accessible
in
the
logs
in
full
form.
So
my
suggestion
was
to
use
current
tooling
used
to
generate
the
conformance,
yml
and
re-run
it
against
117,
116
and
115..
A
I
don't
think
we
need
it
for
114
for
this
work,
but
it
might
be
useful
to
take
that
same
approach
and
generate
it
all
the
way
back
to
one
nine
so
that
we
have
correct
conformance,
yaml
files
so
that
we
know
when
that
was
introduced
and
that
the
current
one
port
in
this
case,
where
we
have
17
tests
without
a
release.
C: I mean, I don't feel like that would invalidate any previous conformance submissions. We don't select based on that release field when we run conformance tests; we select on the conformance tag, so there's that. I feel like, if you were to go through the git history for all these files, figure out when these tests were added, and update and add the release field in master, then great, we've done the archaeology.

C: We now know when those tests were added, in master, and we can generate a conformance.yaml that is up to date now, because it diffs correctly against what's in our tests. If you were to cherry-pick just the conformance.yaml back into earlier branches, that wouldn't work; you'd have to cherry-pick the changes to the test comments back to earlier releases, and I start to wonder if that's valuable, if that's worth the cost of cherry-picking. It may very well be, but yeah.
A: And we would do a one-time run; it's kind of the easy way to do the archaeology. It's just to go through and check those out, run the current tooling, and then do a diff to make sure that we update these 17 tests with the correct release and put it up as a PR against master. But I just wanted to verify that the approach we're using sounds okay, because there were some different thoughts around when to add that.

C: Okay, well, I think, if you're asking about the approach of "let's do the archaeology and add a release field to the comment in master", that sounds great. I would maybe go a step further and update our job to consider anything that doesn't have a release field to be invalid.
A
We
are
doing
something
similar
for
the
the
product.yaml
or
the
conformance
submissions
to
verify
that
those
fields
are
all
present,
including
their
release
and
verifying
that
the
title
of
the
pr,
the
the
the
version
number
in
the
product,
yaml
the
version
number
and
the
results
matches
so
that
we're
not
including
the
test,
it's
just
to
give
the
approver
who's
human,
the
the
quick
eyes
to
go.
A
Yes,
all
tests
for
this
very
specific
release
that
we
know
is
listed
in
the
title
and
then
it's
all
there
and
we'll
go
through
and
add
this
to
the
job
where
we've
there,
I
think,
there's
a
verified
performance
handle
and
we
just
run
the
generator
and
ensure
that
it's
the
same,
so
we
update
the
generator
to
fail
if
it's
missing
all
of
the
data.
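A hedged sketch of that kind of submission check, assuming the PRODUCT.yaml field names used by the CNCF k8s-conformance instructions (treat the exact field list as an assumption) and a simple rule that the version must appear in the PR title:

```go
// Hedged sketch: validate a conformance-submission PRODUCT.yaml and the PR title.
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v2"
)

type product struct {
	Vendor           string `yaml:"vendor"`
	Name             string `yaml:"name"`
	Version          string `yaml:"version"`
	WebsiteURL       string `yaml:"website_url"`
	DocumentationURL string `yaml:"documentation_url"`
	Type             string `yaml:"type"`
	Description      string `yaml:"description"`
}

func check(prTitle string, productYAML []byte) error {
	var p product
	if err := yaml.Unmarshal(productYAML, &p); err != nil {
		return err
	}
	required := map[string]string{
		"vendor": p.Vendor, "name": p.Name, "version": p.Version,
		"website_url": p.WebsiteURL, "documentation_url": p.DocumentationURL,
		"type": p.Type, "description": p.Description,
	}
	for field, val := range required {
		if val == "" {
			return fmt.Errorf("PRODUCT.yaml is missing %s", field)
		}
	}
	if !strings.Contains(prTitle, p.Version) {
		return fmt.Errorf("PR title %q does not mention version %s", prTitle, p.Version)
	}
	return nil
}

func main() {
	productYAML := []byte(`
vendor: Example Co
name: ExampleKube
version: v1.18.2
website_url: https://example.com
documentation_url: https://example.com/docs
type: distribution
description: An example distribution used only for this sketch.
`)
	fmt.Println(check("Conformance results for v1.18.2/examplekube", productYAML))
}
```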
C: Not at the moment; this is making me want to get to the discussion on profiles more. All right.

C: I think, on the endpoints-with-low-priority PR, unfortunately, because it has the phrase "low priority" in it, it was a low priority for me to review it. I will try taking a look today after this meeting, because I'm kind of free to process some stuff after this. I may not be the authority to rule absolutely yes or no; I might need to pull in folks like John or Clayton, and they're not around.
C: Yeah, thank you. So, okay, basically this boils down to: I'd like the ability to specify different sets of tests. All right, hang on, let me back up. Conformance, as we know, is supposed to test everything that is non-optional in GA, and we've talked about the idea of having profiles, where we could describe additional functionality that a Kubernetes cluster has that's not necessarily available by default on all Kubernetes clusters.

C: Today we rely on a single test tag, a single well-known string embedded in the test name, called [Conformance], and that's how we decide whether or not a test belongs to the suite of conformance tests.

C: So I want to be able to specify which tests belong to which set of tests for a given profile. I want to be able to restrict how we specify which test goes to which profile, in the same manner that we currently restrict which tests are allowed to be promoted to conformance. And I need to be able to validate whatever test run Sonobuoy produces: I should be able to take a look at that and say, your tests pass this profile, and this profile, and this profile. Does that make sense?
C: So those are the absolute must-haves for this. Some things that would be nice: I feel like, if we take a look at the current set of conformance tests, some of them do things that not everybody would like to allow. Some of the tests do things that require cluster-admin privileges, or they do destructive things.

C: So we feel like the existing set of conformance tests maybe actually corresponds to, I don't know, two or three different profiles, something like that. It might be nice to be able to say: hey, if you certified your Kubernetes offering as conformant for a previously released version of Kubernetes, that actually means you can now say that you have these profiles too, as we sort of constrain what the base of conformance is. It'd be nice to be able to expand what people got as a result of passing that slightly larger set of tests. So that's a nice-to-have. Another nice-to-have would be to be able to define these lists of tests, which test belongs to which profile, out of the Kubernetes tree. This is basically because the Kubernetes tree is incredibly slow and painful to merge into.
C: We might have a much faster merge velocity if we could get that out of tree, and that could allow us to, and this sort of ties back to the earlier point, maybe decouple from the Kubernetes release life cycle and retroactively specify things. And ideally, the way that we specify profiles should help when we're choosing which tests to run: it would be great if we could specify, only run the set of tests that correspond to this profile or that profile.

C: I haven't decided at the moment whether profiles would be exclusive, that is to say, whether a test that is part of profile A cannot be a part of profile B. I'm allowing for the possibility that a test could belong both to profile A and to profile B, but I think understanding that would only come out of deciding what those profiles should actually be, like concretely: what do we want to call them, and how do we want to cluster things together?
C: All tests would still have to use the conformance framework, even if they're part of a profile. But, so, we have a check right now that says you can't skip if you're a conformance test; that's how we sort of enforce the non-optional part of it. I would remove that and say you are actually allowed to skip even if you're a conformance test, and this would allow tests written for a certain profile, for certain optional functionality, to check whether they can do their thing.

C: So the idea would be to implement a tool to do this out of the Kubernetes tree, and to use, as a source of truth, files that live outside of the Kubernetes tree. For grins, I'm just assuming, like we added something called kubeconform for the behavior stuff, let's pretend there's a new command-line thing called kubeconform and it lives over in kubernetes-sigs, and I could run that as a command-line tool.
C: It's a kubeconform validate, and then I would say what version I'm trying to validate for, and here's the junit file that I got from my Sonobuoy run, and then it'll spit out which profiles you passed: okay, great, you did the base profile and you did the foo profile. Going further than that, I think it would be ideal to offer this as a service through the Kubernetes project.
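To make that concrete, here is a very rough sketch of what such a hypothetical kubeconform validate could do internally: parse the junit results from a Sonobuoy run and compare the passed test names against per-profile test lists. The tool, its flags, the junit layout assumed here, and the profile definitions are all hypothetical.

```go
// Hedged sketch of a hypothetical "kubeconform validate" flow.
package main

import (
	"encoding/xml"
	"flag"
	"fmt"
	"os"
)

// Assumes a flat <testsuite><testcase .../></testsuite> layout.
type junitSuite struct {
	TestCases []struct {
		Name    string    `xml:"name,attr"`
		Failure *struct{} `xml:"failure"`
		Skipped *struct{} `xml:"skipped"`
	} `xml:"testcase"`
}

func main() {
	junitPath := flag.String("junit", "junit_01.xml", "junit results from a Sonobuoy run")
	flag.Parse()

	raw, err := os.ReadFile(*junitPath)
	if err != nil {
		panic(err)
	}
	var results junitSuite
	if err := xml.Unmarshal(raw, &results); err != nil {
		panic(err)
	}

	// A test counts as passed if it neither failed nor was skipped.
	passed := map[string]bool{}
	for _, tc := range results.TestCases {
		if tc.Failure == nil && tc.Skipped == nil {
			passed[tc.Name] = true
		}
	}

	// profile name -> required test names; in practice these would come
	// from out-of-tree profile definition files.
	profiles := map[string][]string{
		"base":    {"[sig-node] Pods should be submitted and removed [Conformance]"},
		"storage": {"[sig-storage] PV Protection example test [Conformance]"},
	}
	for name, tests := range profiles {
		ok := true
		for _, t := range tests {
			if !passed[t] {
				ok = false
				break
			}
		}
		fmt.Printf("profile %q passed: %v\n", name, ok)
	}
}
```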
C: It might be an API, or it could be something as simple as a static web page that you upload something to and it spits the response back to you, but, you know, sort of further reinforcing that the definition of what conformance is is part of the Kubernetes project and offered as a service by the Kubernetes project.

C: This would allow us to restrict the addition of tests to different profiles to different sets of reviewers. So maybe there's a really highly restricted set of reviewers for additions to the core profile, but then maybe there are a bunch of storage subject-matter experts who gate the addition of tests to the storage profile.

C: It could be done just periodically by a Prow job or something. So this does allow us to retroactively define profiles, because it decouples the definition of what conformance is, what the set of tests is, from the life cycle of the release. It doesn't necessarily change what each test is actually exercising or doing. Would this play well with Sonobuoy? Because Sonobuoy is kind of the tool that most people use to certify whether or not they're conformant.
C: So it seems like the least invasive approach. One of the big cons is that I have no way to actually specify which tests to run: if I just want to run the set of tests for the storage profile, I have no way of specifying that. And it kind of prevents me from doing atomic gating stuff, because things are now out of tree, so I couldn't necessarily promote a feature to GA and also have it covered by the storage profile all in one go; I would have to promote my storage feature to GA and then, over here, update the storage profile's list of tests. On the plus side, there's really no churn around test names, so all of our existing tools, like TestGrid and Triage and the things that base their history on test name, we wouldn't lose any history there. So there's that. Any questions so far?
A: But I was wondering if it would make sense to have the profiles metadata, a list of profiles that the test belongs to, on the test itself, so that the tooling for generating your profile YAML is very, very similar to the process for generating our conformance.yaml. While that would require some stuff in tree, it would allow us, going forward, to define it there. It sounds like one of your felt needs is for us to retroactively define profiles for stuff from 1.18 and earlier.

C: That would be nice to have, yeah. I think my hope is that we would actually take all the code right now that walks all the files and stuff and take that out of tree, and then we could just have a script that installs and runs it, so that changes to what the metadata should contain and what it should look like all live out of tree, but we still have an entry gate on conformance.
A: Okay. On the Sonobuoy and the tooling: it's been a while since I've run it, but I remember there being a Sonobuoy website where you could go and say "I want to do a Sonobuoy test", and you click new and it says: do kubectl apply with this YAML, and if you did that it would deploy Sonobuoy and upload the results so that you had this dashboard-y type thing. And I wonder, if we're going to put forth the effort to do that, whether there's at least what you're calling a kubernetes-sigs kubeconform.

A: I wonder if there's a way to combine that work we have with Sonobuoy, for that website-y thing, together with this validation process: just pointing to your uploaded results, somehow verifying via OAuth that that company submitted the results via that process, so that getting the shiny badge and all of that is so much cleaner, even if it does still result in just a PR. But it would be a PR where the results were really run directly from the tool.

A: I think some of the reason for the current process is that some of the environments were completely offline and people wanted to upload stuff. I wonder how important that particular use case is, and, if we were to open it up, how much would just happen via something like a kubeconform apply, or your kubectl apply of this kubeconform.k8s.io sort of approach. Several thoughts there, but I just wanted to get them in the air.
C: So I could see kubeconform maybe being bundled inside of the conformance image that Sonobuoy uses, so that Sonobuoy has access to that binary directly, and it becomes one of the things that Sonobuoy pre-downloads in the mode where it pre-downloads all of the images necessary for conformance testing. So you could do something like that. And as far as integration with the service, that's why I was saying, well, you know, it could be another command-line sort of thing, or it could just as easily be an API that we provide.

C: So I feel like there are a number of different extension points that we could provide, because yeah, that idea of taking the results that are generated by Sonobuoy is definitely a hard requirement. So I just wanted to walk through some of the alternatives here. So, if having all the files out of tree feels way too... oh, go ahead.
C: So the idea is, with the default approach, they would just run all the conformance tests, and we'd spit out what profiles they pass. I could also see spitting out: you didn't pass this profile because you didn't pass these tests.

C: Sorry, you cut out kind of in the beginning there, but my hope is, if I were to suddenly wave a magic wand and have this apply to 1.18 today: if you pass the existing set of conformance tests that have the tag [Conformance], you will absolutely pass the base profile, for sure. I guess maybe we can list that as a requirement, but, in trying to reorganize things, you would definitely still be conformant at a base level, and optimally, if we're doing things right, you'd actually get additional profiles. For example, I know Red Hat has always been kind of security-conscious and doesn't want tests that have, like, super-user cluster-admin access to the cluster.

C: So we could maybe say that's an additional profile, and then, by having passed the conformance tests, you actually fulfill both the base profile and that other profile.
F: I may be thinking about it wrong, but let's say I add all the node conformance tests as a profile, or storage, whatever. Based on this plan, those will be part of the conformance tests; they will have the conformance tags, so they'll be run automatically, whether that particular installation has storage capability or not. So in that sense you will not be able to run 100% of the conformance tests; there would be a majority of the conformance tests belonging to a specific profile that would be failing on that installation, right?

C: So that's actually kind of out of the scope of what I want to discuss; I just want to discuss how we're implementing it. My personal preference would be to stick to as few profiles as possible and no more. I'm always kind of a fan of seven plus or minus two as the magic number that humans can hold in their head and reason about at one time.
C: Right, so those are some of the alternatives I considered, down at the bottom. You could just do what we do already and keep using tags, and then add, say, a profile-foo tag, and then have regular-expression support where it's like: oh, I just want to focus on conformance, the base profile, or the base profile and the storage profile. And I would continue to use the same tooling that we have today, so we'd still hit merge conflicts on a single file called conformance.yaml, but we'd restrict access to it.
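A small sketch of that alternative, building a Ginkgo-style focus regular expression from tag strings embedded in test names; the [Profile:...] spelling is hypothetical:

```go
// Hedged sketch: compose a focus regexp that selects the base profile
// plus any requested extra profiles from tags embedded in test names.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func focusRegexp(profiles ...string) *regexp.Regexp {
	parts := []string{regexp.QuoteMeta("[Conformance]")} // base profile tag
	for _, p := range profiles {
		parts = append(parts, regexp.QuoteMeta("[Profile:"+p+"]"))
	}
	return regexp.MustCompile(strings.Join(parts, "|"))
}

func main() {
	re := focusRegexp("storage")
	fmt.Println(re.MatchString("[sig-storage] CSI mock volume test [Profile:storage]")) // true
	fmt.Println(re.MatchString("[sig-apps] Deployment lifecycle [Conformance]"))        // true
	fmt.Println(re.MatchString("[sig-network] something optional"))                     // false
}
```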
C: And yeah, it just kind of pushes our use of special strings in test names past the limits of reasonable usability. So then I thought of two other options we could use to support the concept of tags a little more natively. One thought is we could try just wrapping Ginkgo and introducing our own concept of tags.
C: So, going back up to the solution that I proposed to try to overcome some of these shortcomings: I know we're basically at time at this point, but I do propose other ways. If we're too uncomfortable having the files we use to figure out which tests belong to which profile out of tree, we could go back to doing it in tree with conformance.yaml: keep it exactly as it is now, but add an additional field called profiles.
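A sketch of what that could look like, mirroring the fields the generated conformance.yaml carries today (testname, codename, description, release, file, as best I recall) plus the hypothetical profiles field:

```go
// Hedged sketch: a conformance.yaml entry extended with a profiles list.
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type conformanceEntry struct {
	Testname    string   `yaml:"testname"`
	Codename    string   `yaml:"codename"`
	Description string   `yaml:"description"`
	Release     string   `yaml:"release"`
	File        string   `yaml:"file"`
	Profiles    []string `yaml:"profiles"` // hypothetical new field
}

func main() {
	entry := conformanceEntry{
		Testname:    "Pods, lifecycle",
		Codename:    "[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]",
		Description: "A Pod is created, observed, and deleted.",
		Release:     "v1.9",
		File:        "test/e2e/common/pods.go",
		Profiles:    []string{"base"},
	}
	out, err := yaml.Marshal([]conformanceEntry{entry})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```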
C: The problem is, we're then kind of stuck with all of our in-tree merge velocity, and it's going to make it a lot trickier to backport these changes, and then to be able to focus on a given profile or not.

C: I also talk about the idea of adding additional flags to the e2e test binary, where we sort of hijack Ginkgo's focus and skip functionality and instead say: if you use these flags, they alter the e2e test run. Could you scroll down just a little bit, Chris? So, for either of those approaches, whether I define the files externally or have them defined in tree in the conformance.yaml file, I could add some flags to say: focus on this profile, or focus on this other profile.
C: I feel like that more directly addresses your concern of "I want to run the tests that I know will pass", which is maybe more user-friendly from the test-execution perspective. But again, I somewhat feel like that's a nice-to-have, if we can instead provide some additional bit of tooling that you tack on at the end.

C: So, okay, I see. Instead of tests for alternate profiles failing, I would anticipate seeing them skip, so I don't think you'd see a failed test run if you didn't support all of the profiles; you'd instead see a green test run with this many tests passing.
F: Yeah, I do like that, yeah, I do like that. I mean, if you have a reserved set of words for skipping that we can control, then the gating can also help in that sense: oh, you can only do these kinds of skips, because these belong to these profiles. That's doable, right, I mean.

F: But we are at the top of the hour. So do you have a KEP for this, or do you propose that we comment on this? How do we brainstorm?
C: Go ahead and comment on this; I'll send this out to the SIG Arch mailing list, and then, once we've addressed comments, I would anticipate updating the profiles KEP. Sounds good.
A: Thanks. One quick note as we go: part of our OKRs, within the CNCF APISnoop repo, for the work we're doing, is gating the CNCF k8s-conformance repo, and so we are looking at that conformance.yaml as the way to make sure everybody runs their tests, and we'll fold that in. But the other major goal, the one that I hear at least from Aaron, was to gate k/k using our endpoints as a metric, and I'd love to find a way to get some broader feedback on that.

A: I want to be sure, because it has been stated as a major goal, and if we're going to adjust that, I want to make sure that we've got clear direction from other folks as well. Aaron, welcome.

A: We can follow up on that offline. Okay, thank you everybody for attending. If anyone wants to stick around after, I'm fine with that. I'm going to stop the recording here. Right, I don't think it's saying it stopped yet; my computer's been super slow today, so let's just assume it's still running until I see it. I've just had it pop up and say: stopped.