From YouTube: 20190502 sig arch conformance subproject
A
This is the conformance subproject of SIG Architecture. The reason this is kind of a one-off meeting — the reason I requested this particular meeting — is that we don't often get the folks who are working on the actual promotion of the e2e tests into the standard meeting, so I wanted to have a separate discussion with them to see what issues they were running into.
A
We could also have a standard set of status updates and other things at the end of this discussion, but I do know that there have been a couple of blocking issues for a while, so I wanted to get their take, before we get into anything else, about what they see as the biggest obstacles and how we can help facilitate and expedite their work — because they're the people doing most of the work right now.
B
Essentially, we talk about which e2es we can promote — taking that low-hanging fruit — and next we see the gaps in the APIs for which we can start writing tests. Sometimes we run into behaviors that are hard to test and take a while. For example, one thing the team did a few months ago was figuring out how the PreStop and PostStart hooks work.
B
The hook was not ported to ROTC — we did not know about that. So there was a case where, you know, the PostStart hook was not working, and it took us a while to figure out why the e2e tests were not behaving; this behavior gap is not tested in the e2es. So a lot of research, as well as a lot of time, is spent on those things. We should optimize it, but none of the normal community effort is there to improve the e2e coverage of the behaviors.
A
The first one, literally, is: what exactly can we do to help unblock you there? I do know the Windows folks have a separate ability — we added this — where they are able to kick a Windows test job, but it's not part of the default PR-blocking jobs, and I believe they updated the instructions for the main conformance promotion.
D
No, currently we do not have access to that, but definitely — the specifics of the agnhost image, again, that's one of the major things to discuss: what exact things should we start including inside the agnhost image? If we talk about something like the wget flags, which are not supported by Windows, and if we try to replace that with curl — again, there would be system-level dependencies there; some flags would be supported and some would not be.
D
Suppose we are trying to promote any e2e to conformance: we are not sure whether those e2es would be successful on Windows or not, because on the dashboard only a few of the e2es are listed on the Windows jobs — those have a record of whether they run on Windows or not, but not all the e2es, right.
F
So I think you should leave it up to the Windows team to suggest improvements, and not worry too much about the Windows side at this point. And we do have the LinuxOnly tag now, right? The LinuxOnly tag we are using with conformance as well. So your team should not worry about the Windows stuff at this point, because that's a whole big can of worms, and there are a lot of people on their side trying to work that out.
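For reference, the mechanics being described work roughly like this: e2e tests carry bracketed tags in their Ginkgo descriptions, and a [LinuxOnly]-tagged test is skipped when the target node runs Windows. The following is a minimal, self-contained sketch of that tag-filtering idea — it is illustrative only, not the real e2e framework code:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldSkip mimics tag-based test filtering: a test whose description
// carries [LinuxOnly] is skipped when the node OS is Windows.
func shouldSkip(testName, nodeOS string) bool {
	return strings.Contains(testName, "[LinuxOnly]") && nodeOS == "windows"
}

func main() {
	tests := []string{
		"[sig-network] DNS should provide DNS for the cluster [Conformance]",
		"[sig-node] Sysctls [LinuxOnly] should support sysctls [Conformance]",
	}
	for _, t := range tests {
		if shouldSkip(t, "windows") {
			fmt.Println("SKIP:", t)
		} else {
			fmt.Println("RUN: ", t)
		}
	}
}
```

This is why adding the tag at promotion time is cheap: the filtering is a plain substring match on the test name, so a Windows job simply never schedules the tagged test.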
F
But what you can do is create issues for it: when you're working on something and you see something that could be a problem, just create an issue and assign it to SIG Windows — send it to their SIG, essentially saying, "this is conformance; this is going to be a problem for you next time." And I'll send you the names of a couple of people — Patrick Lang and, who else, Claudio.
C
Sorry — I feel like what you're telling Mike to do is to add the LinuxOnly tag to any test that he promotes and not bother figuring out whether or not it works on Windows. I feel like, right now, when we promote a test, there's this consternation from the Windows folks — like, "hey, you shouldn't add the LinuxOnly tag," or "you should," or whatever. So — and this is cool — I'm just making sure the mechanics of it sound good to everybody.
A
The promoters are not aware that there's the ability to kick a Windows end-to-end test for that promotion, and I don't know whether or not we have a lot of PR-blocking jobs, so I'm a little bit hesitant to promote the Windows PR-blocking job — but I don't necessarily think that's a bad thing. I'm looking at dims and Aaron in particular for feedback on that.
C
It should adhere to certain criteria, which have yet to be documented, but essentially I would want to see it run as non-blocking for some sustained period of time, to gather data on whether or not it makes sense to block — but there's a whole other question of making sure we're not monotonically blocking on everything. But do you think there actually is a job that you can trigger? Yeah.
A
And so, as part of the feedback loop, we'd update the documentation to say, as part of the promotion process: if Windows is going to be applicable to this test, you should put in your submission, at the very end, the kicker for the Windows job, and it should be able to report to you whether or not it's supported.
D
Okay, so the basic part of what we are doing now is, first of all, analyzing the e2e thoroughly — what kinds of properties and behaviors it follows — and then, if it is eligible for promotion, we just promote it. But what happens is that most of them end up back in the review phase, just because of the different kinds of review criteria — one of them being Windows: whether it's running on Windows or not.
D
So the thing is that we are not sure what the ultimate result would be — whether it would be successful or a failure — but all the effort, like the analysis for the promotion, is spent on our side first; and also, even on the promotion itself, how much time it is taking, flakiness, all these things. But based on your suggestion, definitely, we should focus on the behavior; whether it's running on Windows or not is the second priority, and in the GitHub issues we can have the detailed discussion on that.
A
I did want to get your feedback on a question I had. You have this issue open with regards to the two bullet-point criteria about monitoring events and known condition messages, and there are a lot of tests like that. I have asked Brian repeatedly, on multiple occasions, to give guidance on what the expectations are for the behavior they want — what we expect — given that we have a sort of implicit state machine versus an explicit one.
D
So, on the results: I came across around 30 to 40 which are partially or directly using the messages and status, but most of them are relying on — those tests which are verifying the failure behavior should really utilize these messages instead, to check whether the failure was due to that specific reason.
A
So that gives me more information I can feed back to Brian, to try to write down what the expectations are. Because the problem is, if they're trying to check the error condition, it becomes very difficult to do if you're not checking the actual message itself, because the errors and the failure conditions are not enumerated for a given release.
D
So again, I was going through the design documentation, and it mentioned that we should check against the different types of messages that are getting returned and their state — whether they are succeeding or failing. It said we should not rely on the messages; but again, there are tons of failure reasons, and if you are not verifying them, it would be far less precise.
A
Okay, that's good feedback — I will take that and use it. The other question I had for you: I talked with Srini about automating some of the project board maintenance. So if something is labeled area/conformance, it gets put into the project — is there — while Aaron is here: is there a bot command to add it to the project?
C
I'll have to add that. Just to be clear, you're saying there is a plugin called project? At least, I can search for the word "project" here, I see it exists, and it claims it supports a /project command. Are you saying there are other commands in that plugin? Because —
B
Hippie Hacker gave me an environment last week, so I've been testing these plugins on that environment, and now we're both working on it, and we have all these plugins tested. But there are a lot of code changes, because the code was not working. So I have one PR outstanding right now — I'll post it in the chat quickly — and then there are two more PRs coming in, for the create and the clear commands.
A
The problem is that when PRs get posted or issues get created and they have the label area/conformance, that doesn't go to the project board. It requires a Mechanical Turk — you know, that's me or Aaron — who actually goes through periodically and literally puts the items on the project board, because we have write access, and then kind of shuffles them throughout the board. So —
A
And that was the original request that I asked Srini to work on. I think it doesn't matter, as long as there's a process. Ideally it would just be a label and automation would do it, but if they have a process to add it to the board, that would be helpful too. I think in the meantime, until we have clear guidance and any PR updates, feel free, Maya, to just assign it to @timothysc and I'll use that as routing, because assigning is the only way I can manage.
G
Yeah — the thing is, we've created that new PR, and with it we were working through the broken project plugin; the project plugin is somewhat fixed now, and as soon as that gets through, we can move on to what I think will be much more useful for the broader community as well, which is populating the boards automatically via search queries. That's the project-manager plugin, versus just the project plugin.
A
What are the biggest issues besides Windows? You mentioned that was one; I have noted the automation piece, because what happens is I have to go through manually with a couple of different windows open — because GitHub, fun — to make sure I can actually do the diff of what's actually on the project board, and I was looking at that this morning. What other issues do you think are high-profile, that we as a collective group can help grease and make easier, or give you guidance to make easier for you?
D
Okay, so if we are totally focusing on the conformance promotion: definitely, picking up the priority of specs or components. For the first one, we started picking up the pod spec, and most of the fields — like the tolerations and service accounts, the direct properties of the pod spec — are getting covered. But if we go deep down into the service accounts, that spec comes into the action as a second step, and again verifying the different attributes and properties and promoting their behavior.
D
If you talk about the service accounts — imagePullSecrets, secrets — whether these properties have been covered inside the e2e with respect to service accounts or not: that is the thing for the analysis. After analyzing the service accounts, we found that the imagePullSecrets and the secrets are not verified.
D
So, during the evaluation process for the conformance promotion, we are finding all these gaps — drafting them in a local file system, or in a centralized Google Doc, or creating issues, so that a new member or anyone can start writing that e2e. So, along with the conformance promotion, we are finding the gaps for improving the API coverage. Where can we start recording these gaps — as GitHub issues, or in a centralized document?
A
Yes — so definitely I'd advocate having issues, just so they're tracked. I don't think we can track it any other way; if we have separate docs, it's just too difficult to maintain state over time. We have a project board, we have the labeling, and once we have the automation we'll have visibility across everybody. So I think just add the area/conformance label and we'll put it onto the board. One of the areas that we talked about during the last conformance meeting — which was last week Thursday —
A
— was the idea of watch behavior, right. You have it as etcd tests inside of the issue, but that's fundamental to Kubernetes, so identifying key watch behavior — and I think there's a very limited subset of tests that actually do watches — so extrapolating those tests. I do have a question regarding that: those watches should probably be using the common patterns that exist inside the code base, so it would just be straight API coverage — you'd be using the libraries and verifying the library behavior of the watch; it's the API.
A
I think, as Brian mentioned, pod behavior is first and foremost — identifying those gaps and filling them — but I think secondary, which was also listed inside of our conversations and somewhere in our notes, was that watch behavior is fundamental to the system, right. So pod behavior is fundamental to the system, and watch behavior is fundamental; verifying that those two key — I don't know — cornerstones, we're gonna call them linchpins, are correct across all environments is pretty important.
A
So last time we talked, in the very beginning, we started taking — the example is that we have a P0, and P0 is finishing the pod spec. Everybody had, like, a P1, and an enumerated list of the other things. In our conversation last time, at the very beginning, the next fundamental piece would be watches, right — and then we actually need to enumerate that particular list. I think there's still plenty left in the pod spec to go, to be honest, so I think that will keep them busy.
A
Well, we're looking through the list — if I'm on the project board, there's a bunch of things that we questioned; it starts right here. We just need to prioritize all these other things that are listed down here. So it's like a tracking issue for all the different behaviors, and all we want to do is at least sort some behaviors to the top, right — and we obviously know that the pod spec is the most important one.
A
That's for me to walk through, and by the next conformance meeting, to give a concrete list of the semantics of behaviors that we kind of expect. You know that Daniel had mentioned that we have a very limited number of watches that we actually verify across resources — and I do know in the code, because I wrote the code, that it's a separate code path entirely. Someday you could tie them back into the main set of code, but they're separate code paths right now.
C
The other context in which I feel like watch has come up a couple of times is in making sure that tests are not flaky before they are promoted to conformance, and so I know I've worked through with Andy a little bit on: how do we rewrite a test to make sure that it's not using a watch to get its behavior done? Yeah.
A
Verifying the informer — or the watch behavior across informers — across a number of different resources, I think. The notion is that watching across resources gives you broader coverage of the API, because right now we watch a very finite, limited number of resources versus verifying the others.
C
So when you talk about watching across resources — for example, is there a test where we create a deployment or something, then delete the deployment, but actually watch to make sure that the pods got deleted in a cascading effect? Is that the cross-resource thing you're talking about?
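A sketch of the kind of check being discussed: after deleting a parent object, consume a watch-style event stream and require a DELETED event for every dependent before the stream closes. The `Event` struct and channel below simulate a watch stream for illustration — a real e2e test would use the client library's watch interface against a live cluster:

```go
package main

import "fmt"

// Event mirrors the shape of a watch event: a type
// (ADDED/MODIFIED/DELETED) and the name of the object it concerns.
type Event struct {
	Type string
	Name string
}

// awaitCascade consumes watch-style events and reports whether every
// dependent object in want was seen DELETED before the stream closed —
// the cross-resource check: delete a Deployment, watch its Pods go away.
func awaitCascade(events <-chan Event, want []string) bool {
	pending := make(map[string]bool, len(want))
	for _, name := range want {
		pending[name] = true
	}
	for ev := range events {
		if ev.Type == "DELETED" {
			delete(pending, ev.Name)
		}
		if len(pending) == 0 {
			return true
		}
	}
	return len(pending) == 0
}

func main() {
	ch := make(chan Event, 4)
	// Simulated stream after deleting deployment "web": its two pods
	// are garbage-collected, each emitting a DELETED event.
	ch <- Event{"MODIFIED", "web-pod-1"}
	ch <- Event{"DELETED", "web-pod-1"}
	ch <- Event{"DELETED", "web-pod-2"}
	close(ch)
	fmt.Println(awaitCascade(ch, []string{"web-pod-1", "web-pod-2"})) // true
}
```

Structuring the assertion around the event stream, rather than polling a list endpoint, is exactly what makes this a test of watch behavior instead of plain API coverage.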
D
So another thing I wanted to get clarity on: if you are talking about the pod spec, covering most of the cases — as an example, if I am picking up the service account — in general we use the non-optional fields, especially for the conformance promotion; if fields are optional, we do not promote them. But the service account is definitely an important property, and inside that, we can create the service account without any other properties — like the secrets, or the imagePullSecrets, or the other properties.
D
Okay — and most of the areas, like the create and the list, are verified very precisely, or to a large extent, but not the patch and update fields. Also, there are fields like the fieldManager, now in the query parameters. So all these things — again, these are important when we are trying to update the objects, and especially they would be useful in the patch operations and update operations.
A
I think you'd be best served looking at behavior first: what are the behavior expectations of the things we are trying to do, and what are the end-user operations that we care about the most? I wouldn't go and focus on broad coverage of everything as the primary function, but focus on the standard set of operations that people will do — update is common, patch is really common. I wouldn't go to the extremes; just get the basic coverage for those operations in place first, and then add optional fields —
A
— second. But I would not go through the whole extent of it — it could kind of be a black hole, right; you could go really deep on any given behavior. I think the common set of operations, of the behaviors that we expect most consumers to consume, should be the driving motivation. So the way I interpret the questions you're asking me is: what is the order of evaluation we need to figure out in order to add specific tests? And I think it would help —
A
— maybe it's on us, this group, to basically have a rubric by which we help. You know, the guidance should be clear, right. The guidance should be — in my opinion, and this is my opinion, but people can chime in and I want their opinions — that we should be covering, from the end-user perspective, what we think should be consistent across providers. If a person does an update or patch command on their cluster, you'd expect that update or patch command to be consistent across two different providers.
B
On these behaviors, it's hard for us to dig deeper and say: oh, this is tested this way, but not that way, and with this set of parameters. At this point we do not have that reference sheet of all the behaviors we need to test — what are all the pod spec fields or parameters that are being tested as part of those behaviors, or not?
A
It's on us — probably me in particular — to start to draft a document that helps make it easier, so that people can work more asynchronously. I wanted to have this feedback session because I wanted to know what the problem cases are and what issues you're seeing. It's pretty obvious to me that prioritizing on your own is difficult, and usually that means we need to help.
C
None of those things actually describes whether or not a given behavior was exercised; they're only numbers that should be going up and to the right as we add more behaviors. And I think there had been discussion in the past about how — okay, cool, we've touched the grace-termination period; how does that interact if a pod also has hooks set up? You know, do the two things fire off in the expected order?
C
To me, that list of behaviors is the desired end state. We can use automation for those proxies of behavior, both in generating things and in measuring what we've covered, but ultimately I don't know how to tie back to the behavior part without a lot of human effort — and I feel like that's independent, in parallel with trying to identify and then test what should be promoted. Yeah.
A
I think it's on our group to try and drive a rubric, or a framework, by which we evaluate this against what we want as the end state. I think we're making good progress with the low-hanging fruit — we've got enough low-hanging fruit to keep busy for months, right — but then there will be the point where we need to look past that: where do we really want to go, and the answer of where we want to go —
C
Yeah — I think you said something else about, like, let's prioritize the most common or most expected behaviors, and my ask in return would be: what data would we be using to prove that users want this behavior, and that it is thus more important than that behavior? The number-one proxy that we had been looking at was: let's take a look at all of the Helm charts that are out there, let's take a look at all the resources that those Helm charts use, and let's see which of those resources —
A
I think that sounds like it's going to take a while. I think in the meantime, you know, continue on the pod spec work, knowing that we are building automation to help make the project board stuff work cleaner. And to answer your question: yes, we should be evaluating optional fields in tests, and that alone will keep you busy for a long time, right — that's not going to end any time soon.
I
But — Tim, Dan — I just thought I heard Aaron asking for potentially some deliverables from Hippie and APISnoop, and I wasn't clear on that. It seemed like it could be done in parallel with better documentation, process, and suggested guidelines on optional fields and everything, but I wasn't quite clear on whether there was an explicit, actionable request there from Aaron.
C
What behaviors and tests we should be generating or prioritizing — I think that what Hippie has proposed in the past, the use of pod manifests, just manifests in general from the Helm charts, could be a useful first artifact from us. That's sort of the priority in which I'm interested in receiving those things.
G
We've updated the front page of the repo to have a bit more info on how to get started with what we're doing — in particular, how to get audit logs for the different areas (we're trying to get those from different groups) and also how to run an APISnoop analysis. As far as things specific to this group, there are links in the document that we've shared.
G
You'll note that in the user-agent regex field you're able to see all of the other user agents, rather than just e2e. That will allow us, when we start getting usage, to see behaviors from other applications and other operators — to see what they're doing — which allows us to focus on those; I'll give an example of that later. The next thing is being able to focus on test tags. I didn't have time to prep it for now, but we can also create our own visibility tags.
G
That would be in addition to the test tags, so we can create something, for example, for Windows tests that also interact with the disk. And then, under test patterns as well, there's a list of all of them; you can filter to specific tests, so we can focus on that over here. I think this is the next link.
G
These are the user agents that are not e2e, and you can see all the matches here — the CSI snapshotter, or metrics, or cluster components. And what's interesting here is that this is our release-blocking job — the one everybody tests against and we say is okay — and we only hit 60% of those endpoints, and 37% of them are hit by conformance tests. It would be good if we were to look at other applications —
G
— like the stuff from the Helm charts. This is not behavior yet; this is just endpoint coverage. And then here's another example of searching for a signal. But again, these are tests and test tags, which means we have to have tests, and, as Aaron was pointing out, we need to see more patterns going forward, so we can identify patterns beyond just the ones we used to identify — and also the kinds of things we talked about: focusing on pod and other behaviors, focusing on lists.
G
I also went through and put together, based on the APISnoop documents, research on what's new in 1.15 — those big, bold headlines of "here are the new tests." There were three or four new tests that were tagged as conformance, and I thought that was interesting: why did we get three or four new tests into 1.15 conformance right off the bat? And then I thought about automating that, so we can see when we get a new test — if we did something like this.
G
And, reading back through our notes here, we're looking forward to getting data in from the kind team and also from the CNF Testbed — that should happen soon. We've made it so that, if you want to focus on different prow jobs that produce audit logs, we can just update that in the repo, and our CI jobs will pick up that data, process it, and present it for us.
A
A real quick question for you — I don't know if you even know this: what's the delta between keeping the tests wide open versus conformance, if you look at the total raw numbers of API coverage? Because there are currently over a thousand tests, and conformance covers less than 200.
C
When I talked about this at Shanghai last year, there was maybe a 10% delta in the coverage from one of the release-blocking jobs — that was all of the non-disruptive, non-flaky, non-feature-specific tests — which only gave us a small percentage of additional API coverage. And the number of test cases is closer to four thousand at the moment, mostly because a lot of those test cases are different storage drivers, as far as I understand.
C
The thing I feel is lacking from these kinds of meetings: I would love to see every meeting start off with "this is what has changed since last time — we've added this many tests, we now have this much coverage, are we heading in the right direction?" And I want to understand whether that would be useful to this group or not, I think.
C
I don't know if you do that right now by walking the board — to understand, like, these are all the PRs we have shepherded in the past week, or whatever — but typically, when I have interacted, I've never really been able to pull us out of the level of just talking super-tactically about individual PRs and individual issues. It would be great to start with what has changed, to see our progress.
A
Beyond that, there was the PR, but there was also the issue for me to follow up, yet again, with regards to guidance on the current set of blocking tests that can't be promoted because of the two criteria. And there's the long-term goal of starting to create umbrella issues for us with regards to behavior, as well as a rubric by which we can evaluate asynchronously and not have to have everyone come to sort of understand what makes sense and what doesn't.