From YouTube: 20190924 sig arch conformance
A
We had a situation right at the end of the cycle where a couple of tests that we had promoted had some disruptive behavior, and we wanted to run it by the group, along with the decision we came up with. If you haven't read that PR, essentially the conclusion is that it's necessary for conformance tests to be able to test these behaviors; otherwise there's a whole array of functionality we're leaving off the table, and that's not really acceptable.
A
So the problem is that there's an expectation set where users can run these things on live clusters that potentially have production workloads. I don't think it's a great idea, but they've done it, and so we didn't want to disrupt their workloads. The solution we came up with was to allow conformance tests to be tagged Disruptive, and to have the default Sonobuoy configuration skip those tests by default.
A
You would need to specify that you're running it as part of the actual conformance program in order for it to include the disruptive tests. I think in general the people who were involved were in agreement that this was absolutely fine, but we wanted to bring it to the meeting here to see if there are any concerns or objections. So I opened the floor.
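The tagging mechanism described above can be illustrated with a small sketch. The test names below are hypothetical; the assumption (based on the bracketed-tag convention the group is discussing) is that the runner selects tests by matching patterns against their names, so a Disruptive tag in the name is enough for a default configuration to exclude it:

```shell
# Illustrative only: hypothetical test names using bracketed tags.
# A default run focuses on [Conformance] and skips [Disruptive],
# so it stays safe on clusters with production workloads.
tests='[sig-apps] Deployment rollover works [Conformance]
[sig-node] Taint eviction drains a node [Conformance] [Disruptive]'

# Default behavior (what a Sonobuoy-style default config would do):
printf '%s\n' "$tests" | grep -F '[Conformance]' | grep -vF '[Disruptive]'
```

Running the full conformance program would amount to dropping the second filter, explicitly opting back in to the disruptive tests.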
E
I think, if there are any modifications to documentation that are required — I know that Aaron went through a while ago and made sure that Disruptive was removed from all of the tests. I don't know if there's any reflection of that in the documentation, so we should probably finish that bit and make sure that we are in line there.
A
So there are a couple of things we have to do for the release cycle. What we did is just change Sonobuoy, but we've sort of left it open. Right now there's actually tooling in place that does not allow a disruptive test to be conformance, so that tooling has to change. We need to update the documentation to say that it's okay, and, I think, revisit where Brian disagreed that disruptive tests cannot be conformance.
B
The first thing is our conformance numbers, which I'm super happy to report on: from last August up until this August we'd only been able to increase 3.6 points, and this last month alone we've increased 4.1 points. Some of that is due to us being really good about adding new tests when we add new API endpoints, and some of it is due to new tests, but as a metric that we've been looking at for a long time, to see that high of a jump in a short period makes me happy. Yeah.
A
If I could just add: actually, more than 50% of the tests added over the last six cycles or so were added in the last cycle alone. Some of that's due to policy changes we made, right — saying the things that tests need to have, and so on; the CRD and webhook stuff was a third of those — and then just the cool work everybody's been doing in this group. So thank you, everybody.
B
So that's just the conformance numbers, and I'll try to start each call with that information. I have other information on the endpoints and on the fields, but it's not beautiful yet, and it's not easy to grok, so rather than dig deeply into that and show it, I'm going to refrain this time; hopefully next time we get together we'll have some nice data to guide that. One of the things we identified — I'm going on to the next thing — is the next steps for our pod deprecation.
B
We noted that we don't have any tests for it, so we wrote a test, and Brian noticed that we never really used it — that's why there's no test — so it should be deprecated. I have created a ticket, but — I don't know if it's within the purview of the conformance group — I would love to push this forward a bit. One of the outcomes of the work is also to reduce our surface area. Yeah.
D
I just want to go back to the previous point for a second. I was actually just curious about the September thing: presumably somebody put it in, it went through review, we decided it's a good idea, and now we decide it's not a good idea anymore. Do we understand how that process happened, and is there anything to fix in that process, so we don't end up with more things going in and being pulled out, I think?
A
You know, yes and no, right? We have KEPs, and if people build a KEP and they start the process and put in an API, and then those people disappear from the open-source community and nobody else picks it up, it could still happen. But I think that at least now, with KEPs, there's more buy-in from a broader community; it's not just one or two people pushing something forward, hopefully. Cool — makes sense. Thank you.
H
I wondered if we need to update even just the definition of what Disruptive is, because I think we were on the same page that these tests that tainted a node and disrupted workloads should be marked Disruptive, right — they'd even conflict with other tests. But right now, I think the definition of Disruptive in that document says that if a test takes down a component, like taking down a node, it is considered disruptive. So I think maybe we need to generalize that to say: if it would affect workloads not produced by the test itself.
E
We do have — there's a couple of other issues that are floating around currently with, I'll call them labels; they're called tags, or whatever — it's the same thing. There is a document that describes most of them, but we should definitely add conventions to that document to be explicit, because there are a couple of other issues floating around — basically people trying to create their own mechanisms — and there are a couple of problems that exist because of that.
D
One of the ideas was that they were just essentially another set of tags — you know, tests that just logically had a set of tags associated with them that tell you what they do and who's interested in them. Part of the problem with tags was that we didn't have a good definition of all the possible ones that we currently support and what exactly they mean, and so, yeah, I think.
A
In that conversation we had with Sweeney and Brad a while back, we decided that validation suites were more or less equivalent to the feature tags, and since we didn't want to necessarily introduce a new class of test for that, we would utilize those existing mechanisms. Ideally — and this has not been done yet, or even noted anywhere yet; that's on me — we would go through and tag existing tests with feature information, with the goal being to collect data from the conformance runs for that analysis of which features are supported, and potentially groupings of features, for the purpose of eventually coming up with some sort of profiles. But that's where, as I recall, we left things.
G
That's exactly how I remember it, John, and we wanted to get a feel for how many of the folks that were running conformance tests — you know, if we added those new feature sets and we got the metrics, we could then decide, like, "oh, everybody's running this; we might as well push this set of features into the core set anyway," and so we might reduce the need for a profile, because we'd have better metrics that said, "oh, we can pretty much just move this," or, alternatively, "hey —"
A
And then — actually, I said conformance, but since we don't have everything in conformance yet, that's not really right. We actually would need to tag most of the end-to-end tests so that we could gather information like that. But again, I think that's an important effort; it's just not on the agenda today.
B
We have a PR open, and there's a link to it; if we could look at that — I can just read it, it's super short. Basically it covers our goals and requirements for existing tests; there wasn't a lot as far as conformance-test specifics yet. We have a lot of don'ts, and those are covered under promoting tests, but conformance doesn't necessarily — we don't have a "don't do this in any test" list.
B
So I don't know if we want to have a set of avoidable practices. Actually, what I want is guidance on writing good tests — trying to identify best practices — as opposed to "here are all of the pitfalls; here are all the things that you might think make sense, but don't." But this is our first step. Okay.
B
It was an action from last week; I want to make sure that it hasn't seen any action. Somebody mentioned some things that we have looked at for a while, and I went out to SIG API Machinery. The next part is adding the release, feature-gate, required, and deprecated fields — so taking that tribal knowledge and making it into something that's available via the OpenAPI, for us to say whether we should use it for conformance or not.
B
I put myself on the agenda for API Machinery last week; it was at the very end, with a short amount of time to go through the things, and there was some pushback on what they were responsible for. I don't know if my approach was off, but if I could get a little more of a consensus on whether this is important, and on how to approach that team — yeah, yeah.
A
There's
there's
actually
I
think
we
have
to
make
it
bigger
than
just
what
we're
doing
like
I.
Think
that
there's
value
in
writing
and
in
the
metadata
we're
talking
about
for
writing.
Clients
I
mean
some
of
these
things,
maybe
not
every
single
field
or
metadata
we
want,
but
what
what
I
want
to
do
is
put
together
a
kind
of
there's
been
a
bunch
of
these
efforts.
Actually
within
capi
machinery.
A
Already
they've
made
proposals
for
this,
and
for
that,
for
the
other
thing,
and
it's
just
never
come
even
some
of
them
have
actually
been
approved
and
then
just
nobody
ever
put
the
juice
behind
them
to
make
them
happen,
and
just
think
that
I
want
to
kind
of
take
a
little
bit
of
a
comprehensive
view
of
of
what
those
are
are
and
what
they
eat.
The
reasons
we're
behind
them
and
then
go
to
that
team
to
that
sing,
and
when
is
there
meaning
you
went,
you
went
recently
or
there's
another
one.
Can.
A
We can find it. If it's this week, it might be pretty hard for me to make it happen, but maybe we can try; and for the next meeting, next week, maybe we can meet, go over some of this stuff, and present it to them in a way that I hope would be — then we're gonna have to put the juice behind it, to use the same word. Yeah.
B
There's a separate area around us being able to map the operation IDs in the audit logs, and the thing is, that's a very singular use case — it's only useful for us, but would impact a bunch. So we looked at adding a feature gate — an alpha feature gate that never gets promoted beyond alpha — so that on all of our CI test runs we actually enable that alpha feature, which enables the audit logging. But there was also another, simpler approach suggested by lavalamp, around using the same code that the API server uses to look up an incoming API call to do the routing. I still need to look into that a little, probably over the next few weeks or so.
B
That's that — thank you. I want to put an action item here for John to assist, maybe next meeting, maybe the meeting after. And the last thing for me: I'd love to get clarity on what we're doing at KubeCon. We can do a little typing here — who's coming, what are we covering in a workshop — and I don't mind picking up and creating a shared Google Doc from that time forward, and interacting with the CNCF to get the right space and rooms and whatnot, but —
B
— doing all the things. And I think the other thing I thought about was doing a test-writing workshop of some sort, where we go through the test-writing docs and we have, like, a hackathon, basically, around it.
E
It's already booked up for that too, so I think if we want to address this in the next KubeCon cycles, we should definitely do it a little bit earlier, so we can get space dedicated for this. I do think it'd be a good contributor workshop, to be honest — not enough tests get written, and people kind of fly by night — so I think actually having a workshop on it would be super good. There will be workshops, I —
A
— think — I don't know for sure, but I believe there are multiple tracks: there's the new contributors, there's intermediate contributors, and then there's the general session. I would expect there to be a workshop on writing tests in one of the new-contributor or intermediate-contributor tracks, although I haven't specifically seen that.
A
Okay — well, I guess the conclusion there is: we're going to look into getting space for a face-to-face meeting, almost like a birds-of-a-feather or something, and then maybe for the following one, for Amsterdam or something, we can look into whether we want to do it in the summit as an actual workshop.
A
Okay,
the
other
thing
I
wanted
to
talk
about
is
planning
for
the
coming
cycle,
so
I
must
admit
that
I've
got
a
long
list
of
a
is
that
some
of
which
I
was
supposed
to
have
completed
by
today
and
I'm,
not
but
I
want
to
make
sure
that
we
get.
On
top
of
you
know
what
that
will
look
like
for
the
the
next
cycle.
A
As far as other items: what is the status, Hippie, of your contract with the CNCF, as far as continuing that testing work? Like, is that something we're gonna keep going through this cycle? I think it's been valuable, but I wanted to know — I don't know; I haven't talked to Dan about it.
B
That's the picture I wanted to show next week, but if you want, I can go over the numbers. Looking at a report: one, two, three, four, five — we have seven fields that are either alpha, beta, or deprecated. That's ephemeral containers, overhead, preemption policy, share process namespace, topology spread constraints, runtime class, and service account — which we still do use, service account, by the way.
B
Even
though
it's
deprecated
in
our
in
our
test,
there
are
some
pod
specs
fields
that
we
could
promote
that
have
existing
it's
this
weird
thing:
I'm
getting
ete
hits
like
the
IDI
binary
is
actually
hitting
the
API
server,
but
I'm
not
seeing
which
test
is
hitting
it.
I
think
there's
some
issues
with
how
we're
recording
our
which
test
is
hitting
the
API
and
it.
A
They don't live in the e2e directory; they live in the node e2e directory right now, while all the conformance-tagged tests live in the e2e directory. Do we know if Sonobuoy — if the process right now, when it runs the e2e tests — is running them focused only on that directory, or is it running across all directories and picking up anything tagged Conformance? Does anybody on the call happen to know that offhand?
E
There are — one, two, three — three separate binaries, and all of the ones that are conformance should have been moved from the node suite into the main e2e suite. Okay, so there are three suites, and three binaries that are produced: one is the node e2e test binary, one is the main e2e test binary, and one is the kubeadm e2e test binary. And all of the conformance tests should have been moved.
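The selection question above can be sketched as follows. The test names here are hypothetical; the assumption is that, once conformance tests are consolidated into the main suite, selection is done by matching the bracketed tag in the test name (e.g. via a focus pattern), not by which source directory a test lives in:

```shell
# Hypothetical test names from two different source trees. A tag-based
# focus filter picks up [Conformance] tests regardless of directory.
tests='test/e2e/network: Services should serve endpoints [Conformance]
test/e2e_node: Device plugin re-registration works'

# Only the tagged test survives the filter, whatever tree it came from:
printf '%s\n' "$tests" | grep -F '[Conformance]'
```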
E
And Aaron did a lot of this exact same work. We did an evaluation — her first-pass evaluation — I don't know if it was earlier this year or last year, one or the other; my brain is like oatmeal. But we had done a first-pass evaluation of the node e2e tests to see which ones we could promote to conformance, and we already moved those. So if there are new ones that people want to promote, we should just move them over. Okay.
A
I'm looking for input from others, but in my opinion this can still be on the list. Frankly, right now, a lot of the things that we have in flight just need to be delivered on. So the tooling KEP — that's where my priorities are, maybe; other people can pick things up, and we can figure it out as a group. But for me it's resolving validation suites slash profiles — bringing that to a conclusion.
A
It doesn't help as much as one would hope in that sense, but yeah. How do we do that, Tim? I mean, we have this group here, and a lot of these folks are writing them. Maybe the same folks here need to pitch in and start reviewing other folks' tests. I'm not sure how we're gonna bring them in — well, I guess one thing we could do, we need to —
E
I think we definitely need to do, like, a roadshow-style thing, where we kind of squat on different SIGs and ask them, like, "hey, have you looked at your conformance profiles to date?" Because it really should be up to them to manage the APIs that they built — mm-hmm — because sometimes, for some of them, I have to technically defer the final decision to them, you know.
E
Anything that's obvious and pretty straightforward — as long as that was the criteria — is okay for me to handle as written. Some of the other ones that are slightly ambiguous — "did you change the behavior or the intent of this test?" — I know there were a couple of those where I have to go back and talk with them to verify, yeah.
A
So,
as
part
of
this
I
think
there
was
a
few
tests
that
we
needed
input
from
node
and
somebody
from
that
group
I'll
have
to
go
back
and
look
at
who
was
did
sort
of
volunteer
to
be
the
point
person
for
conformance
unrelated
things.
So
if
we
can
do
you,
like
you,
said,
go
to
the
roadshow
with
the
sake
talk
about
it
and
see.
If
we
can
get
each
fake
to
assign
somebody,
then
they
can
be
going
for
that
board
and
and
doing
reviews
of
there.
E
I can take an action item to start the conversation on the SIG leads mailing list. Okay — and I can start something; maybe I'll want to wordsmith it and not be too harsh. Maybe I'll write a doc and pass it around this group first, because I'm a little bit surly and old now in this project, and it might come across the wrong way. We want to enlist their help, not shame them, and I want to make sure I'm doing the right thing for the community here.