From YouTube: 20190618 sig arch conformance
A: All right — so hopefully folks can see my screen. Yes? No? Yes. Right. So from a planning perspective, what we had outlined up above was that we wanted to address core functionality and go through things in order. But what I wanted to do as a group here, before we get deep into the 1.16 cycle: we've kind of been marching to a drum that was pre-established, primarily by Brian and globin. What are some of the things that they think need to be addressed, or should be addressed? And then try to sort that against the legacy priorities and see if we can come up with a prioritized list of the most important things we should be doing — because sometimes taking a step back allows us to move faster. So with that in mind, one of the concrete, tangible items that I think is important, and that I think would make us move faster, is John's proposal.
B: That's what I was going to say too. So — you can imagine, Jack, I have to cut out a little early, at 12:30, because I'm going to present a lot of the important work we're doing internally here, in an effort to get some more support internally. But I think the big thing about the proposal, to me — the main thing — is tooling, right? We need more tooling around figuring out what it is.
A: I think so too. We've kind of been doing things the brute-force way, and it's been pretty slow and laborious. Unless we have people who seriously understand the code and are dedicated to digging into the gritty details to fix the tests and make sure they match, we get into this weird catch-22 situation where I think we actually waste more time than we save. Maybe that's just my supposition, but that's my take on it.
B: I agree. I think it's been slow. As part of this internal presentation I'm doing, I looked at the tests over the last four or five releases, so since 1.9, and we've gone from about 159 tests in conformance.txt to 221. And that's just — I asked around, and there are probably over 2,000 tests we actually need. There's no way we're ever going to get up there without some real tooling. Yeah.
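For reference, the count B cites comes from the `[Conformance]` tag on Ginkgo spec titles, which is what gets a test listed in conformance.txt. A minimal sketch of that counting, over hypothetical spec titles (the titles below are made up; only the tag convention is real):

```python
import re

# Hypothetical sample of Ginkgo spec titles, standing in for the real
# kubernetes/test/e2e sources; promoted tests carry the [Conformance] tag.
SAMPLE_SPECS = [
    "should serve a basic endpoint from pods [Conformance]",
    "should provide DNS for the cluster [Conformance]",
    "should support pod readiness gates [NodeFeature:PodReadinessGate]",
    "should run through the lifecycle of Pods and PodStatus [Conformance]",
]

def count_conformance(specs):
    """Count spec titles carrying the [Conformance] tag."""
    tag = re.compile(r"\[Conformance\]")
    return sum(1 for title in specs if tag.search(title))

print(count_conformance(SAMPLE_SPECS))  # 3 of the 4 sample specs are tagged
```

Run over the real test/e2e tree, a scan like this would produce the per-release counts B mentions.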
D: Hi, this is Dan Kohn from CNCF. I just wanted to say we've seen this as a critical aspect since we launched the conformance program, I believe a year and a half ago, and we've been investing significant money with globin along the way. And I think there's a general consensus that the process, as we laid it out, is not working the way we had hoped it would.
D: I would just express here CNCF's determination that having thorough, comprehensive, and well-functioning conformance tests is critical — that determination is still the case. We don't see it as our job to dictate to this group, or SIG Arch, or elsewhere, what the right process is. But we do have some budget here, and we are interested in supporting these efforts.
B: That would be the highest priority, in my opinion. From a blocking standpoint, I originally wrote the KEP in a way that we would do it incrementally, as opposed to, you know, having to redo everything — these tests could be migrated incrementally. We'll see. Hopefully that's feasible, and I think it is, but I haven't gotten much feedback on the KEP. I'd love to get that, even if it's after the call.
E: I can help as well, but it's unclear to me whether I can commit enough of my time to consider this a P0 for me. But this was part of the reason I've worked my way up through the reviewer and approver path for the tool that determines what is and is not a conformance test. I can help with anything that modifies the tool to consume the files described in John's proposal.
G: The name comes from "agnostic host" — basically it's a single word, because the image was supposed to be agnostic of either Linux or Windows. That was the original idea. But yeah — thank you so much for the reviews. The first parts of the centralization merged, which basically means 13 images have been eliminated from the list, and I've just sent a third part today, which adds another 7 to the list.
H: I mean, he's pretty much taken that on, and I try to review when I can, and I think he covers most of what I was interested in. But I do sort of have a question about where that's going long-term. Because, right, a lot of these images that Claudio is working on are low-hanging fruit, easy to consolidate — but eventually, are we going to put in a hard requirement that says: to be a conformance test, you must be able to use some master image?
H: The thing is, my brain immediately goes to those super-behemoth images — the ones for graphics card testing and such. Are those, or any test that has a large image, just never going to be allowed to be a conformance test? I wondered what you all thought.
H: I think that points out one of the places where we're still lacking some tooling. Right — we can centralize the images, but right now there's nothing checking specifically that a conformance test is using a smaller set. You know, if tomorrow somebody introduced a new conformance test with a new image or component, there'd be nothing that would flag or fail it.
E: I agree. This is something I would file under the bucket of improving our conformance tool. The existing tool we have lints conformance tests to make sure they use the word "conformance"; I would like to see that augmented to automate away some of these checks — like, let's verify that a test doesn't call Skip anywhere.
E: Similarly, ideally, let's make sure the images a test uses come from this approved list, or whatever. I might be dreaming too big, but I feel like these sorts of checks should be automated away as much as possible, rather than added to the human review burden.
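The two lint checks E proposes — no Skip calls, only approved images — could be sketched as a simple static scan over a test's source text. Everything here is illustrative: the allowlist entry, the sample sources, and the regexes are assumptions, not the real tool's logic; `framework.Skipf` is the e2e helper the suite uses for skipping.

```python
import re

# Hypothetical allowlist; the real effort consolidates tests onto agnhost.
APPROVED_IMAGES = {"k8s.gcr.io/e2e-test-images/agnhost:2.21"}

SKIP_CALL = re.compile(r"\bframework\.Skipf?\(")       # e2e skip helper
IMAGE_REF = re.compile(r'"([\w./-]+:[\w.-]+)"')        # quoted image:tag refs

def lint_conformance_test(source):
    """Return a list of problems found in one test's Go source text."""
    problems = []
    if SKIP_CALL.search(source):
        problems.append("calls framework.Skip, so it may silently not run")
    for image in IMAGE_REF.findall(source):
        if image not in APPROVED_IMAGES:
            problems.append(f"uses unapproved image {image}")
    return problems

good = 'f := framework.NewDefaultFramework("x"); pod.Image = "k8s.gcr.io/e2e-test-images/agnhost:2.21"'
bad  = 'framework.Skipf("unsupported"); pod.Image = "example.com/huge-gpu-image:1.0"'
print(lint_conformance_test(good))  # []
print(lint_conformance_test(bad))   # two problems: a Skip call and an unapproved image
```

A check like this could run in CI on any PR touching `[Conformance]` tests, turning the review-time checklist into an automated gate.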
A: Those are two very important high-level items that we've talked about. But having the priorities be explicit helps a lot, because we can take a step back, and when we triage the board, we triage it with those ideas in mind. We still do have a very long backlog of issues that we should probably go through and clean up. But are we giving preferential treatment to the current review items? How are we evaluating the priorities?
F: That does raise a question that I've had, which is: the workload controllers — workloads in general — are one of the original stated goals. People should have confidence that a workload that runs on one Kubernetes cluster can move to another conformant Kubernetes cluster. We've started to dip into some of the tests that are more nuanced, such as privileged and storage — things that require characteristics of the cluster — which was always kind of a scare point for some people running hosted services, who might not allow you to delete nodes or run privileged.
B: We have an issue, or a question — I don't know — something we have to deal with around what constraints we put on the conformance test. Like: can a cluster be conformant when the entity running the conformance test doesn't have privileges? I would think we'd want that to be yes — there are hosted environments where I imagine you don't necessarily want to give admin privileges.
F: We don't really record case law on those very effectively, I think. If we get into priorities for review bandwidth: we are generating a good set of documentation, but are we doing enough to standardize it, either into a top-level KEP or a top-level list — every time we generate a case-law exception in review, are we making sure that shows up in a list that other people can consult?
E: Good. My other thought, in terms of prioritization: I still feel like we have an awful lot of low-hanging fruit that has been enumerated by Hippie and the rest of the folks working on APISnoop. Perhaps some of the APIs that are covered don't directly tie to pods, but I believe a good chunk of them do, and I think they have identified a number of cases where we could probably get a lot of bang for our buck.
E: Personally, I'm finding that for many of the tests that are getting kicked up, I have to dive super deep to understand what all the utility functions and helpers and jigs and fixtures are doing in the context of our conformance requirements. Yes.
A: I've done that as well. In fact, I've done the first pass — I've done it in multiple passes — and it's a pain. If there's a mass promotion across an entire file, that means you actually have to dig through the entire corpus of every single call it's making and verify that it's actually doing the right thing. So.
E: In terms of the progress we can measure, that feels more painful than it's worth right now — whereas the progress we could measure by going after APIs that really aren't exercised or covered would be helpful. And I just view that as something we can do in parallel, or alongside, or complementary to the listing of behaviors. So.
A: If anyone wants to clean up the way I've articulated this plan, please do so — these are just my chicken scratches — but I'm trying to distill what we've talked about: working on John's proposal as a high-priority item; cleaning up with the master image list; working on the existing tests we've already enumerated and clearing out the backlog; and also developing criteria, as well as case law, for how we want to address some of these weird, thorny conditions, because we actually haven't written that down yet.
F: Have we made an effort in the last couple of releases to gather feedback from people seeking conformance, and from the larger group? I don't think that specifically should change our priorities, but it is maybe an open question that I had. I know, as someone who has submitted conformance results, I have not had an issue — but it felt very rote and mechanical, and I'm always suspicious when I'm doing rote, mechanical things, whether that's success or failure. John?
H: Should that be taken into account in the priority list? I see what you're saying — I thought of it when it came up talking about people wanting to elevate tests to the level of conformance. I mean, people from the Sonobuoy route have not reached out and said anything in particular was lacking, or that they wanted more. But we do have an ongoing feature request, or issue, that talks about people whose clusters have a bunch of different taints.
F: I don't know where we've imposed that even on regular e2e tests, and SIG Testing has been pretty good at being accepting of it. People have much more varied topologies now than they did three years ago. So in a sense, that is success — that we do get this feedback, that people actually care enough to bring it back to us. There have been—
A: —a number of them. Trying to think back about some of the issues that have been filed from the wild, from when I was doing some of that work: the taints one is a very good one — that's happened a number of times. There have been a number of issues with regard to the behavior of some of the tests, some of them related to taints. Trying to remember — Alexandre filed the issue; I'd have to look it up, but it was concrete feedback from the wild. DNS is always a problem.
B: DNS, yes. So, tainted environments — are we saying there could be issues around — I guess I have a question around this, which is what John just said: can people subvert the whole point of conformance by just having one node that's not tainted, and then the other nodes behave in crazy ways? No?
F: I mean, you know — this is actually where I was leading with the question about feedback: if the goal is workload portability, there will always be workloads that are not portable. You can't take a Windows workload and run it on a Linux cluster. And I haven't heard a lot of pushback on the sorts of things we've enforced; it seems like people are reasonably understanding when, you know, the taint stuff comes up.
F: You know — "okay, we'll work within the constraints of your topology." We were very afraid of the divergence of workloads. And, wearing my OpenShift hat: "we don't give you any permissions on the cluster by default" is certainly one of the foremost examples where people are like, "well, my root workload doesn't run; this cluster must not be conformant" — and we haven't really even had that problem.
A: But we do get feedback from the wild. The thing is, especially with the people who are running conformance — here's the common scenario I've seen, and this has happened to me many times: most of the people from the wild wanting to run conformance are people who did it the hard way themselves, or somebody else in their environment did it the hard way.
A: They did it the hard way, so they want to verify that what they've created isn't a monstrosity, and they're using conformance as the bar — which is kind of a low bar, being honest. Those folks are fairly savvy and want to make sure they meet that bar. They haven't really pushed back hard, and they haven't really mentioned other aspects of workload portability, such as migration — not that I've seen so far.
E: Some of the early feedback we do get right now: anybody who's willing to submit their conformance results upstream — that's great. So we do at least get some signal from seeing names like Gardener and Oracle and vSphere and OpenStack show up, giving us at least a single-digit number of platforms — just to get an understanding of whether or not the additional tests we're adding start to impose more constraints than are reasonable, in this tiny version of the wild.
F: Without getting too far into it — maybe this is for the broader working group — it seems like we're making reasonable progress on the original goal, and we aren't talking about changing that goal. We've been able to reasonably satisfy the original constraints, if we want to take on additional constraints.
F: The one I was kind of thinking of, Tim, is that the class of user you describe is almost more of an "I built a monstrosity" — I like that terminology — "I've created something, and I'm not quite sure about it." It's not about workload portability quite as much as "would the Kubernetes authors approve": would someone who has encoded knowledge about this, more than me, in theory be able to judge that what I've done is correct? I see that a little bit as a slightly different mission.
A: A lot of times they do this to verify that their cluster is in a good state before they add workloads. So it's like the initial smoke test they run before any cluster operation — because sometimes they change parameters, or muck with their configuration, because they're doing something special for their environment. So I do see your point: there's a weird constraint where people are using it a lot for base-level smoke testing of their environments, not necessarily to guarantee that workload A will work on environment B. But—
E: My question here is: does the Sonobuoy route empower people to use the latest and greatest versions of the conformance tests? Say we wanted to get signal on what we're doing for 1.16 — to get the earliest possible signal that, oops, we promoted a conformance test that turns out to really not meet expectations across a wide variety of environments.
F: And I think, to get to both of your points: the subset of tests that we run in conformance — if you look at the thousand-odd e2e tests, with 200 or 250 of them conformance, I bet you 250 of the rest only work on GKE (since Jon left, I can dig on him a little bit). Many of the others do have value, but are temperamental. You know, I see this all the time in OpenShift, where we run a broader set — we don't just run conformance.
F: We run as many as will pass. That number is not a hundred percent; it's not fifty percent; it's somewhere in between. The broader set would be useful. Is that something we should focus on at a SIG Testing level — it's not conformance? Or, if conformance was originally about workload portability, is there a class of tests which is not yet conformance, or pre-conformance — that 75 percent or so that actually should pass on most clusters?
F: So one opportunity, maybe, for a priority: can we do a better job of exposing tests that are not yet conformance to the conformees, or conformance testers, to get feedback and distribute some of this review? Because right now — the DNS one was a great example — we might not know until it causes half of the conformees to fail, and then we've got a long process. Can we outsource more of this — judge tests based on that subset and get—
E: —the report. This is part of why I go back to the review burden: vetting that a test does everything the correct way and checks the correct things takes a lot of time. So this is why I feel there's value in using APISnoop to take a look at all of the existing tests and all of the APIs they hit right now, to understand which tests we should prioritize reviewing and promoting into conformance.
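The prioritization E describes — review first the tests that would add the most currently uncovered endpoints — can be sketched with made-up APISnoop-style data. The test names and endpoint operation IDs below are invented for illustration; only the idea of mapping tests to the endpoints they hit is from the discussion.

```python
# Hypothetical APISnoop-style data: the endpoints each candidate e2e test
# hits, plus the endpoints the current conformance suite already covers.
TEST_ENDPOINTS = {
    "Pods lifecycle":      {"readCoreV1NamespacedPod",
                            "patchCoreV1NamespacedPod",
                            "replaceCoreV1NamespacedPodStatus"},
    "ConfigMap lifecycle": {"replaceCoreV1NamespacedConfigMap"},
    "Node status":         {"patchCoreV1NodeStatus"},
}
ALREADY_COVERED = {"readCoreV1NamespacedPod"}

def rank_by_new_coverage(test_endpoints, covered):
    """Order candidate tests by how many uncovered endpoints each would add."""
    gain = {name: len(eps - covered) for name, eps in test_endpoints.items()}
    return sorted(gain, key=gain.get, reverse=True)

print(rank_by_new_coverage(TEST_ENDPOINTS, ALREADY_COVERED))
```

Reviewing in this order front-loads the coverage gains, which is the "bang for our buck" point made earlier in the call.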
E: This general shoving of a couple of tests into a place called conformance is us trying to really cherry-pick well-behaved tests that aren't flaky — because what's used for release-blocking isn't really that bad. And we can certainly — we have pretty well-defined jobs we can point people to, to show: these are all of the release-blocking tests, and you are free to try them on your cluster.
C: Small comment here: we proposed validation suites, right? That is the idea we discussed at KubeCon Seattle, basically. The idea is that any subproject or group can create a set of tests as a suite — conformance right now is a validation suite — and then we can select and just run that validation suite. We can have as many validation suites as we want; we don't have to—
A: The problem is — as Clayton was mentioning, and what John was mentioning too — that means consumers have to craft this regex that is impossible to understand or know. People just want to know: can they run a suite against their cluster and have it guarantee that their workload will work on it? Sometimes, it sounds like.
F: So I think there are some interesting things here. You brought up Sonobuoy and running other stuff — I think Sonobuoy exists to fill a gap: an opinionated execution, to make it a little bit easier. My perspective — and this gets to the regex — is that the way we organize tests in Ginkgo worked well for the first two years, and now — Ginkgo itself is not a very good framework for running tests anyway.
F: So — I finally got tired of Ginkgo. Okay, we use the upstream tests, but we do some hacks on Ginkgo to build a command line around running tests as suites. You run into regex problems — we finally hit O(n²) problems with regex and filtering tests. And I almost think that when we're talking about ease of use, whether it's Sonobuoy or the core tests themselves, we should try to encapsulate the idea of a set of suites. We have one.
F: Could we do it without bringing in conformance — whether that's Sonobuoy, or options to really change the upstream tests to not be so darn Ginkgo, whatever it is — working through that too. Whether we have the broader suite — and maybe, Aaron, as you're saying, everything that's not in the edge cases, although some of the features I really think should be included.
F: All of those would be the pre-conformance suite, and that gets us the flakiness signal and the works-on-most-people's-clusters signal, which preloads some of that work. Right now — with the exception of the people who are on the dashboard, vSphere and a few others — I don't think as many people are getting exposure to that set, because of the focus on the conformance bar. What can we do to incentivize people to run up that ramp, so we don't have to do as much of it in review and test?
A: Before we get too deep into the weeds — yeah, I think having the case law and the documentation for how we want to address these things might be a good first step. And then, if we really wanted to, we could even take that idea and farm it out as a proposal for people to actually execute on. So—
H: So I could do that super quick, if it's just a matter of pre-forming the regex. We kind of do that already, because we have a quick mode which says, you know, run "the pod should be submitted" — it's a single fast test. We could have some sort of verbose, or thorough, or conformance-plus mode, or whatever we want. And even if we just wanted to put it out there kind of quietly and try it out — we have a release coming up as soon as Kubernetes 1.15 goes out; I could put it in there just to—
E: My only reservation to that is that this sounds an awful lot like "hey, let's make a canonical skip list in this third-party product that doesn't live upstream." We went through this already, and it resulted in different people running different numbers of conformance tests, and us being unclear on what the actual definition of conformance was. So my only ask would be that, if we do that, we plumb it all the way through to the upstream test runner, so that everybody can use that same canonical definition — and then I'm totally cool with folks cloning it.
H: Yeah, I think — and we're totally on board with Sonobuoy trying to get that skip list — I don't want it to be canonical; it's just being user-friendly: "hey, you might be interested in these tests, but they hold no special significance upstream." That's sort of my thought of it: it's the next place, if someone says, "yeah, I know my cluster's conformant, but what else should it pass, or what else might I want to run?"
E: So — this is nice to have, but maybe to echo what Tim was saying earlier, I kind of bucket it under the same place as profiles, and for me even validation suites: I still feel like there's so much work to do on coverage of our core APIs that I kind of just want to focus on work that gets us in that direction. I was wondering if we could chat about Hippie's APISnoop-related work in that context.
A: So I think this list is reasonable enough that we can kind of move on — we have other agenda items to talk about — but this was the primary thing I wanted to discuss. My goal here is that when we triage the backlog, as we go through and groom the backlog for execution, we groom it with this priority in mind. That way we can expedite individual issues as they come through.
J: I did, and it might be easier if we open the markdown document for the main one there; that way everybody can collaborate, and we'll use it to update the umbrella ticket. We found it difficult in our team to have an umbrella ticket that we couldn't all edit together, so this is kind of our shared space. Just a quick note: what we're trying to accomplish here is a 16% increase in our core conformance coverage — there's more than core, but this is the core part.
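The "tested but not conformant" bucket J walks through next can be sketched as a filter over per-endpoint status. The endpoints and flags below are invented for illustration, not taken from the real umbrella ticket; only the classification idea is from the call.

```python
# Hypothetical per-endpoint status for a handful of core v1 endpoints.
ENDPOINTS = {
    "readCoreV1NamespacedPod":          {"tested": True,  "conformant": True},
    "patchCoreV1NodeStatus":            {"tested": True,  "conformant": False},
    "replaceCoreV1NamespacedConfigMap": {"tested": True,  "conformant": False},
    "deleteCoreV1NamespacedPodTemplate": {"tested": False, "conformant": False},
}

def promotion_candidates(endpoints):
    """Endpoints some e2e test already hits but no conformance test covers."""
    return sorted(n for n, s in endpoints.items()
                  if s["tested"] and not s["conformant"])

def conformance_coverage(endpoints):
    """Fraction of endpoints covered by the conformance suite."""
    return sum(s["conformant"] for s in endpoints.values()) / len(endpoints)

print(promotion_candidates(ENDPOINTS))
print(f"{conformance_coverage(ENDPOINTS):.0%}")  # 25% in this toy data
```

Promoting an existing test for a tested-but-not-conformant endpoint is the cheap win the umbrella ticket is organized around; untested endpoints need new tests first.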
J: If you click on that "tested but not conformant" link in that first area — maybe middle-click or something, so it opens in another tab — what that will allow you to see is the 30 endpoints in core that are tested. This document and this umbrella ticket go through these specific endpoints and what tests we could promote. So if you middle-click or control-click on read, replace, or patch there, it will bring up the specific endpoints and their list, and then, for each of those, we have some issues that may or may not have a—
J: —PR open. So that's kind of the work to accomplish. If we accomplish this work, and everybody — you know, the reviewers — agrees that these are good tests, then we're going to get pretty far pretty quick. And if we want to, we can go through each of these and maybe just get a consensus as far as a triage of which ones we should hit first, or go from there.
J: We've got other things in place to go beyond this: to go beyond core, we go to all of stable, and we've also got some plans to go through everything by our kinds and prioritization, and to work with John's work. Obviously, I think there's a way we can combine the definitions of the tests in some type of machine-readable format by working with John. And all of the lists of the tests — if you click on a test in the list, it'll show you the endpoints that that one hits. So—
A: I am looking at this particular one, and the nomenclature is a little confusing to me as I'm reading through it. It says core v1 node status, right? And if I look at that: "PreemptionExecutionPath runs ReplicaSets to verify preemption running path." Yeah, that's a really confusing flow for how the endpoint gets hit through the test suite.
J: Sure — that is the only test that hits that endpoint, the only one. So if we do promote that single test, it will increase the coverage on that endpoint.
J: There are two approaches here. One: we have tests that do hit it already, so we can look at the test code and see what's being done. The other: that's already a nice test, so let's promote it. But that's kind of what this is for — to give us a quick, very focused viewpoint: here is a test; is it conformant? And we can go through that checklist of "is this a well-behaving test or not," I think.
A: I can triage the list you currently have here in a reasonable time frame — this markdown file, which is also in the umbrella issue — and go backwards to see whether or not this is a happy path. Like, we could promote the test and it would expose more endpoints, yes, but it's also this weird, indirect test. If somebody were to modify it or change it — or it's totally not intuitive to the user that we have coverage of an API in an indirect fashion.
J: I think that's what we have currently with all the existing tests that were there. I agree that we want to be writing tests in the future that do specific coverage for that particular endpoint, but all of our existing coverage metrics by endpoint come from tests that weren't intentionally targeting all of those at once. Well—
J: There's one umbrella for all of them, and it's a copy of this markdown document. Then, underneath, some of these have umbrella issues for the particular kind — like Node has one; let's see one of the other ones here — PodTemplate has one, and I think it also has a PR; ConfigMap has one and has a couple of PRs; Namespace and LimitRange have a PR. And they're all kind of in the markdown document as well. Whenever we update this, every once in a while I'll sync it back into the umbrella ticket.
J: We do have some other umbrella tickets; this was the main one I thought was important for this meeting. The other ones are, as Aaron said, focused on Kubernetes components that are already hitting untested endpoints — usually kube-apiserver — and I'm working on that and publishing that umbrella ticket; and then we have another one that is, I believe, core components hit by other objects during the conformance test. But I think we can hold off on those umbrella issues for now.
A: I think it would take a while to suss through the details of these issues. I can work through one of them — like the first one you've listed here — but there are a couple of other ones that we'd probably need to triage. Maybe I can do a first pass on the first one and then take a look at the other ones. Does anyone else want to go through that with me?
A: I don't know — we spend a lot of time in planning, and we usually do this meeting on a bi-weekly basis. I'm loath to add more meetings to my meeting schedule, but at the same time I do realize that our backlog is pretty dirty.
J: I'd like to be a part of that, but I would need it to be Wednesday or later in the week.
A: John's KEP is there; we already prioritized that. I think the ask for this group would be for folks to review that KEP — and that includes everybody on this call. If you are engaged, please review the KEP; that would be highly beneficial to the group. And we'll go through it — I'll try to get your issues addressed before we do the grooming on Thursday. Hippie, I'll try to take a look at it and make sure it's done before then. Thank you. But first, before we go — Srini?
C: Yeah, I can quickly update. Basically, we can go through this list on Thursday. Actually, I have questions regarding some of them already — some tests involve security context and whatnot, and then there are some naming changes, so I'm not sure whether we have to write new tests: node selector versus node affinity, such things are coming up. So we can do that on Thursday. The second thing I mentioned in the email was that the team is transitioning.
C: And then a couple more people will join from the team. That's pretty much it from me at this point in time. On our project plugins: the /project plugin is actually complete — all three PRs are merged now. I just have to add a config file — maybe a PR to add a config file — so that we can use that plugin. The project-manager plugin PR is still under review; I'll update you on that.
A: My biggest ask is for bodies to help push these rocks — this is a lot of work to get done. I'm — I'm struggling for words a little — a little dismayed that vendors don't actually resource this as a first-class item. I think we need to treat this seriously, not just with our community hat on, but with our vendor hats on: to try and get resources assigned to making this better for the commons, because it helps us all.