From YouTube: 20190507 sig arch conformance
A: What Trini is doing — there are two parts to that work. One part was just to add the automation for a person to do /project whatever; the second one was to specifically associate the area/conformance label with automatically putting it in the triage column of the project board, because right now that's all Mechanical Turk.
C: So I posted a couple of long responses on that one, as well as following up on some of the issues or PRs that were referenced. I think all of those tests mentioned currently don't qualify for conformance — they are doing things that they should not be doing; they are overly specific; they're like unit tests that are testing the implementation, not the behavior that is guaranteed to be stable and portable.
C: So I tried to link to previous documents and explain why it's wrong, and where, if you had made this assumption, it would have been broken in the past. For example, when we added CrashLoopBackOff and ImagePullBackOff, any test that was testing the earlier specific reasons for why a container was not starting would have been broken — and as we continue to improve operational behavior, I would expect more such behavioral improvements to be added, right? So that's not a thing you can depend on.
C: We have broken that several times, and when we have — we run version skew tests; we run the existing tests against other Kubernetes releases — and when those break, someone has to go through case by case and figure out if that breakage is acceptable. And then they have to decide: am I going to backport changes to the test in the release branch for a previous release? And that's just fraught with peril. We have broken actual compatibility in the past when people changed the tests to match new behaviors.
A: Yes and no. I mean, I understand where they're coming from, and I understand the approach they're taking — that there are stricter guarantees that exist in other systems, and there's a very loose — there's no guarantee, I guess, in Kubernetes land, and we're making that explicit. So as long as we give…
A: I did have another action, which is the long-standing one, which was: we need a rubric — which I think I may also ask you, Brian, for help on. With regards to an understanding of how the [unclear] folks want to be able to proceed — there were a lot of questions. I don't know if you had a chance to look at the conversation we had last Thursday; there was a recording that I posted. But having a way for them to evaluate what they should do next is helpful, in the absence of being able to sync with this individual group — like an individual rubric for how they want to approach things. Right now they're trying to fill out pod spec behavior, and then there were a bunch of questions about what matters most, and there are a lot of conditional fields that exist inside of the pod spec. So they were trying to determine what's the best area to cover first.
C: I can go back and look at that recording — there's not much in the way of notes — or we can discuss it now, or at a future meeting if that would help. But the way I approached it, at least initially: there was a lot of low-hanging fruit, because you could just literally walk through the pod spec — there are fields, and if they're expected to be portable and there were no tests covering even rudimentary behaviors, then we should, you know, create some tests that cover those rudimentary behaviors.
C: Do we have tests that cover all the different ways you can run lifecycle — or sorry, liveness probes and readiness probes? There are HTTP probes and exec probes and TCP probes — do we have basic coverage of those? If we've completed all the low-hanging fruit, then we can think about: are there failure-mode behaviors that we care about, or should we move on to other things, like watch? And for watch, I think I would just walk through the exercise of imagining what a typical controller would do.
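[Editor's note: a minimal sketch of the three probe handler variants being discussed, using the k8s.io/api/core/v1 types. The embedded handler field is named ProbeHandler in current releases of that library (it was Handler around the time of this meeting); the paths, ports, and commands here are illustrative only.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Three probe variants a conformance suite would want basic coverage for.
func probeVariants() map[string]corev1.Probe {
	return map[string]corev1.Probe{
		"http": {ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		}},
		"exec": {ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
		}},
		"tcp": {ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		}},
	}
}

func main() {
	// A coverage walk would pair each variant with both livenessProbe and
	// readinessProbe on a test pod and assert the expected restart/ready behavior.
	for name, p := range probeVariants() {
		fmt.Printf("%s: %+v\n", name, p)
	}
}
```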
C: Right — so I think that the rudimentary test we had for watch is: if I do a create, an update, a delete, do I see those events? That's super rudimentary. You need to be able to do things like: if a watch connection breaks, you need to be able to resume at the same resource version, even if there are other changes. You need to figure out what the consistency model is and test that, right?
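[Editor's note: a sketch of the resume-a-broken-watch behavior described above, written against current client-go signatures (the context argument did not exist in client-go at the time of this meeting); the namespace and resource choice are arbitrary.]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List to get a starting resourceVersion, watch from it, and if the
// connection drops, re-watch from the last version observed so no events
// are missed — the behavior a typical controller depends on.
func watchConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
	list, err := cs.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	rv := list.ResourceVersion
	for {
		w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
		if err != nil {
			return err
		}
		for ev := range w.ResultChan() {
			// A conformance test would assert Added/Modified/Deleted arrive
			// for a create/update/delete performed while watching.
			if ev.Type == watch.Error {
				break
			}
			if obj, ok := ev.Object.(metav1.Object); ok {
				rv = obj.GetResourceVersion() // remember where to resume
			}
			fmt.Println(ev.Type)
		}
		// Channel closed: connection broke; the loop re-establishes from rv.
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	_ = watchConfigMaps(context.Background(), kubernetes.NewForConfigOrDie(cfg), "default")
}
```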
C: So the way I would approach it is: look for broad, simple coverage first, and then once that is satisfied, go down to the next level and try to imagine what the gotchas are. They might be different in a different implementation, but should still be expected to be portable — the same things that a user writing a client could reasonably depend on in a forward-compatible way. I understand that increasing levels of subtlety are possible.
B: I can actually send out — yeah, we are listing out the behaviors. Actually, we are trying to figure out what other fields are missing, but just getting to understand a field apart from the behavior is getting harder, because we kind of think that we have reached a point where we don't have any more. But I can actually send out that list by tomorrow, and then we can see what is missing in terms of behavior. That way we can probably chip in and actively add some items there.
E: I think you can go ahead and share for me. My Zoom client tends to break my box, and so I can't control the Zoom session — I may have to be kicked or something. Thank you for that. I'll try again.
E: Related to our conformance coverage: during the last week I sent some links, but they were to the entire coverage, and they were a lot higher. When we focus on stable, we can see that the numbers do get better. As far as the total number of tests in conformance, it's slowly creeping up — there we actually went down, between 112 and 113. We also updated the link; some of the other things we were looking at are actually down there in the board.
E: If I hear that right, we're trying to see what's different from last week and kind of track what's different on our board. In the same way that I'm working with the k8s-infra team to audit the changes within our community infrastructure, I thought it might be interesting just to write something to dump the board and do a diff, so we can see why it's changed. I know that's been mentioned a few times, and on the pod behavior.
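[Editor's note: a rough sketch of the dump-and-diff idea against the GitHub Projects (classic) REST endpoints, which at the time were only exposed behind the inertia-preview media type. projectID is a placeholder and GITHUB_TOKEN is assumed to be set in the environment.]

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

const projectID = 1 // placeholder: the conformance project board's numeric ID

func get(url string, out interface{}) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	// Classic project boards required this preview media type.
	req.Header.Set("Accept", "application/vnd.github.inertia-preview+json")
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	var columns []struct {
		ID   int64  `json:"id"`
		Name string `json:"name"`
	}
	if err := get(fmt.Sprintf("https://api.github.com/projects/%d/columns", projectID), &columns); err != nil {
		panic(err)
	}
	for _, c := range columns {
		var cards []struct {
			Note       string `json:"note"`
			ContentURL string `json:"content_url"`
		}
		if err := get(fmt.Sprintf("https://api.github.com/projects/columns/%d/cards", c.ID), &cards); err != nil {
			panic(err)
		}
		// One stable line per card: column name, then the card's note or issue URL.
		for _, card := range cards {
			fmt.Printf("%s\t%s%s\n", c.Name, card.Note, card.ContentURL)
		}
	}
}
```

Saving the output to a dated file after each meeting would let a plain `diff last-week.txt this-week.txt` show exactly which cards moved.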
E: That might help us to focus and see what parameters and objects are in use. And then the last thing: some new UI elements to focus on — I don't know whether to call them kinds or definitions, depending on whether we're looking at it from, you know, what's passed in the API or what's referenced in the OpenAPI spec — but it's looking at which applications and what parameters are being passed to those kinds, to patch and update.
D: So I mean, the idea — I think we've talked about this before — is to separate out the list of expected behaviors from the actual test code. Right now they're embedded in there, and it makes it really difficult: to answer the questions raised earlier, like what coverage do we have on pod spec, we have to essentially dig through a lot of code to determine what that is. It also makes it so that when we're doing this test promotion, or when creating new tests — as I see it, there are two different roles here. There's the role of the reviewer who can identify whether a particular test — or rather, whether a particular behavior — should be subject to conformance, and then there's whether individual tests actually test that behavior, and I think two different types of reviewers can do that.
D: Yeah, I mean, it's definitely a substantial amount of work, but I think it can be done in a way — I don't think I described this in the KEP, but it can be done in a way that essentially allows the existing conformance testing, reporting, and evaluation all to continue, because it's just labels at that point, right? So…
E: John, one of the things that we're trying to explore was actually generating some YAML based on the usage from the Helm charts and the CNCF CI audit logs, so that we could see what those parameters are and kind of have — and I don't know what the right wording is, whether they're kinds or definitions — when those calls are made to the different object types within the API, so that we can have that prioritized list. I'm hoping we'll be generating that YAML.
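[Editor's note: a minimal sketch of the audit-log mining described here. Each line of a Kubernetes audit log is a JSON audit Event carrying a verb and an objectRef with the resource name, so counting (resource, verb) pairs gives a rough usage-priority list.]

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Just the audit Event fields needed for counting usage.
type auditEvent struct {
	Verb      string `json:"verb"`
	ObjectRef struct {
		Resource    string `json:"resource"`
		Subresource string `json:"subresource"`
	} `json:"objectRef"`
}

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe in audit.log from the CI runs
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.ObjectRef.Resource == "" {
			continue // skip non-resource or malformed entries
		}
		counts[ev.ObjectRef.Resource+" "+ev.Verb]++
	}
	for k, n := range counts {
		fmt.Println(n, k) // pipe through `sort -rn` for a prioritized list
	}
}
```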
D: Yeah, we can discuss it. I mean, that would definitely give us a more concrete measure — some data — about which things seem to be more commonly used, and so maybe which to prioritize. But I guess, to the discussion of, say, readiness probes or liveness probes: you know, maybe all those Helm charts are using HTTP or exec. Does that mean we don't test TCP as part of conformance? I don't think it means we don't need to — just, I guess, lower priority.
A: The question I have is: we're basically creating a lot of metadata. Would it be better to have a well-structured description for how we approach things? The way you're putting it, you're putting the quote-unquote tag — for lack of a better word — that we've defined inside of the individual tests, and that basically wraps behavior as if it were like a feature, right?
D: Well, I was putting the description you're talking about in a separate file — a separate, machine-readable file that is the description of all the tests and all the different areas that they're in, potentially features they're associated with. And then, when you write a test, you just create a tag referencing back to which of those behaviors this test validates. But there's already metadata around some other tools, and so there are pros and cons to separating these things out, and…
D: My vision here is that we can sit down and lay out all those behaviors, and then we can start to match the tests to them, and then we can answer that question of, for the pod spec, what percentage have we covered — which we have no way to answer right now without manually going through exactly that same procedure. Essentially. Okay.
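[Editor's note: a hypothetical sketch of the shape being proposed — behaviors enumerated separately from tests, and tests tagged with the behavior IDs they validate — showing how the "what percentage of pod spec is covered" question then becomes a trivial computation. The IDs, names, and data here are invented for illustration, not taken from an actual artifact.]

```go
package main

import "fmt"

// A behavior described independently of any test.
type Behavior struct {
	ID          string // e.g. "pod/spec/livenessProbe/httpGet"
	Area        string // e.g. "pod"
	Description string
}

// coverage reports how many behaviors in an area are validated by at least
// one test, given a map from test name to the behavior IDs it is tagged with.
func coverage(behaviors []Behavior, testTags map[string][]string, area string) (covered, total int) {
	validated := map[string]bool{}
	for _, ids := range testTags {
		for _, id := range ids {
			validated[id] = true
		}
	}
	for _, b := range behaviors {
		if b.Area != area {
			continue
		}
		total++
		if validated[b.ID] {
			covered++
		}
	}
	return covered, total
}

func main() {
	behaviors := []Behavior{
		{ID: "pod/spec/livenessProbe/httpGet", Area: "pod", Description: "HTTP liveness probe restarts a failing container"},
		{ID: "pod/spec/livenessProbe/tcpSocket", Area: "pod", Description: "TCP liveness probe restarts a failing container"},
	}
	testTags := map[string][]string{
		"Probing container should be restarted with an http liveness probe": {"pod/spec/livenessProbe/httpGet"},
	}
	c, t := coverage(behaviors, testTags, "pod")
	fmt.Printf("pod spec behaviors covered: %d/%d\n", c, t)
}
```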
C: And so what that would give us is a list of features that are represented as fields of some API resource that should be covered by conformance. What it doesn't give is whether a given test, even if it exercises that feature, actually validates the behavior of that feature or not. Yeah.
E: We're currently referencing the tests by the string that gets generated during the Ginkgo test run. In trying to instrument — a while back I added some information to the HTTP user agent to identify where in the source code we were coming from — in trying to link together where that test is within the source code base, as the names change over time and as tags are added and removed: it'd be nice to have a canonical way to say this test is this piece of code, so we can identify the changes.
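[Editor's note: one possible way to get the canonical test-to-code mapping asked for here is to capture the registration call site with runtime.Caller, so a test keeps a stable (file, function) identity even when its Ginkgo description string or tags change. registerTest is hypothetical, not an existing framework hook.]

```go
package main

import (
	"fmt"
	"runtime"
)

type testRecord struct {
	Name, File, Function string
	Line                 int
}

var registry []testRecord

// registerTest records the source location of its caller alongside the
// human-readable test name.
func registerTest(name string) {
	pc, file, line, ok := runtime.Caller(1) // caller = where the test is defined
	rec := testRecord{Name: name, File: file, Line: line}
	if ok {
		if fn := runtime.FuncForPC(pc); fn != nil {
			rec.Function = fn.Name()
		}
	}
	registry = append(registry, rec)
}

func main() {
	registerTest("should resume a watch at the same resourceVersion")
	for _, r := range registry {
		// Dump as "name <tab> function <tab> file:line"; diffing two dumps
		// tracks renames, because the code location survives the rename.
		fmt.Printf("%s\t%s\t%s:%d\n", r.Name, r.Function, r.File, r.Line)
	}
}
```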
E: So when we're talking about what's happened since last week, what's changed, we could actually see the diff in just the set of functions around that test and reference it. That would be — I feel it would be useful, but I don't know how to identify that and generate it somewhere. So…
E: Of course. Just right now, I think we generate a text file, and the text file is the name of that test as it looks in that run. As far as automating the tooling: not just referencing that name, because it changes over time, but which source code file and function — at least the entry points — so we have a tight coupling between which test this is and where it is in the code base, as those functions change over time. Just real quickly?
A: I care way more about writing down behavior, because — like the description tag John was mentioning — I think it's far more important to me as a person who's trying to evaluate a failed test run. I want to know what it was testing, because it's not very intuitive sometimes from the strings. So I don't really care what the source code is, because I can figure that out. What I care about is: what are we actually trying to test? Because the description itself doesn't actually encompass all of the behavior.
A: What I'm trying to look for — what I'm looking for is: what were you really testing, why were you testing it, and why did it fail? That's the most common feedback we get from the wild: you have no idea what this test is doing. The most common feedback we get from 90% of people is that DNS is terrible — so we should all know this — and the second is that they don't understand what the tests are: how do I figure out what this test is actually doing, to even fix it?
E: Making sure there's not any feedback on it: is stable Helm chart generation the way to go? What about OperatorHub.io? What sources work? How can we currently identify what our user base is, for what's important, and prioritize those sources before I go dig into them? That might be nice, just as feedback from this team.
E: I've tried for a while to work with the Helm — the Helm project has passed through several people, and the buckets themselves, the BigQuery buckets for the popularity data — I had a lot of trouble trying to generate what was most heavily downloaded. If somebody wants to take a second look at that, or maybe pair with me on it, that would be a good source, and I have all of the information in the buckets. But my queries came back like they…
E: …heavily as well. And I've thought about — the same way that we use the Heptio, or the VMware, Sonobuoy, right, where you can go to the Sonobuoy site and say, yes, I'd like to see if I'm conformant, and then you post that — being able to go to APISnoop and say, I'd like to do a new thing, and either upload my audit logs or run it internally, so they don't upload.
D: …time, but — and see what they're using. That kind of gets to another point that probably we shouldn't really talk about now, but I just want to throw it out there: another step, after we get some of these server conformance things resolved, would be to have something that can look at your manifests when you want to deploy an application and identify if everything you're doing there is conformant — is as portable as it can be.
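[Editor's note: a hypothetical sketch of the manifest checker floated here, for a single pod manifest: decode the YAML and flag fields that commonly reference cluster-specific names. The "non-portable" field list is illustrative only, not a real conformance definition.]

```go
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile(os.Args[1]) // path to a pod manifest
	if err != nil {
		panic(err)
	}
	var obj map[string]interface{}
	if err := yaml.Unmarshal(raw, &obj); err != nil {
		panic(err)
	}
	spec, _ := obj["spec"].(map[string]interface{})
	if spec == nil {
		return
	}
	// Fields that typically name cluster-specific objects (illustrative list).
	for _, field := range []string{"nodeSelector", "runtimeClassName", "priorityClassName"} {
		if _, found := spec[field]; found {
			fmt.Printf("warning: spec.%s often references cluster-specific names\n", field)
		}
	}
}
```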
A: There are people that use Kubernetes in a very limited fashion and on a very regular basis. I think what might be interesting is to do a poll. Looking at the Helm charts gets you the anonymous data that you could potentially get, but having an actual poll from the community about what resources they use most, what things they think lack test coverage — just actually creating a structured way for evaluations, so that we can actually get raw customer or user feedback.
A: I've always wondered, personally, why we don't have it just auto-generated and pushed into the docs. That way, when a person does a search — here's the most common scenario I see in the wild: something failed and they want to figure out why, and then they go to a certain set of Slack channels and ask the same question, in fact, sometimes. And I think optimizing that user story of saying: I want a Google search…
A: …why this failed — boom, that punches to a webpage that documents and outlines what this test is doing, and here are the actual pointers where we can go find more. If it's part of the release tarball, that might help some people, but that doesn't actually give the end-user community or the distributors the ability to find and search and hunt, to figure out what this failure is and what are some ways to diagnose the problem.
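[Editor's note: conformance tests already carried a structured comment block (Release/Testname/Description) next to framework.ConformanceIt, parsed by a generator under test/conformance. A simplified sketch of extracting those fields into markdown that a docs site could index:]

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// Matches the Release/Testname/Description fields of a conformance comment
// block, up to its closing */.
var meta = regexp.MustCompile(`(?s)Release\s*:\s*(.*?)\n\s*Testname\s*:\s*(.*?)\n\s*Description\s*:\s*(.*?)\n\s*\*/`)

func main() {
	src, err := os.ReadFile(os.Args[1]) // e.g. a file under test/e2e/common
	if err != nil {
		panic(err)
	}
	// Emit one markdown section per conformance test found in the file.
	for _, m := range meta.FindAllStringSubmatch(string(src), -1) {
		fmt.Printf("## %s\n\n*Since:* %s\n\n%s\n\n", m[2], m[1], m[3])
	}
}
```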
B: Yeah, so the metadata, as I said, with the test — if I hear it correctly — is to give the end user some direction on how to debug the problem if the test fails, right? So yeah, you can probably respond to that enhancement request and say: I propose that it should be part of the release docs under conformance, but it can live anywhere. I mean, once the KEP is approved, I think I can even try to push a PR to get this in, maybe as part of 1.15 or so.
A: The question is — we were talking about: should we release the metadata as part of the release, and I said: why isn't it just part of the doc sites? Then anybody who Google-searches can find the conformance tests documented in it. So when a person fails a test, they just do a Google search and they find it, right?
A: Sure. The question that I had was: I personally don't think this is something that rises to KEP level — a feature that cuts across multiple SIGs, or that'll even be a major announcement. What I want to make sure we're doing is: KEPs are meant to raise the bar for feature enhancements so that we have broader signal coverage. This just seems like a no-brainer; we don't need so much process to be burdensome and in the way — just go ahead and do it, I don't have a problem with it. What are your thoughts there? That was the…
C: …question. Yeah, I mean, we're creating KEPs for more and more things — anything where there's any discussion needed, even within a SIG, about the design or whether or not we should do it — and this particular thing, I think, is on the edge. If you just did something and showed people, and they said, sure, that looks good, you could just merge it. If not, then we can go back and discuss it in the KEP. I'd be fine with that.
C: It might be an issue that's in the In Progress column rather than a PR, and then, when it's ready for review, it moves to the In Review column. If it's in review, yeah, put it in the In Review column. Hard question: you know, anything that's labeled WIP probably should not be WIP if it's in review. Okay.
A: So just to give a brief update: part of what we want to do is make sure that we're tracking progress. There are currently five that need approval; I do believe that everything else is LGTM'd. Of those items in there, there were several assigned to you, John — do you want me to punt the networking ones to you? Because there's, let's see, one networking one. That's good.
A: …are using it the right way? Yes, that's a fair statement, and I think having the proper ontology or taxonomy is important. Yeah, we are out of time. You get one more right here — this one I want to take a look at: promote e2e check — existing e2e check — pod toleration for NoExecute. This looks like another one. This one's for all of you.