From YouTube: 20190730 sig arch conformance office hours
B
Thank you so much. So recently, the version of Redis that is being used by, you know, our tests, the conformance tests, has been bumped to 5.0 or something like that, mostly because it has IPv6 support, which is perfectly fine. The only issue is that the Windows port of Redis only goes up to 3.2, which is a bit lower than 5, and we are already encountering some issues because of that in some tests.
B
So there are not so many tests which use that image, but there are a couple of conformance ones which do. Some of them, from what I saw, don't necessarily have to be Redis; one of them, speaking of, is the guestbook application test. If you don't know it, I will link the issues later, after I am done talking.
A
The ideal scenario would be to potentially even dedupe this with a single binary. I don't know if you can put a mixed-mode artifact thing in place here; it might be more complication than it's worth. But for simplicity, for something like this, it might be easier just to say: if platform X, deploy this version; otherwise, platform Y, deploy this version, and document that inside the tests.
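A minimal sketch of that platform-conditional approach; the image names, registry, and OS strings are hypothetical, not anything from the real test suite:

```go
// Pick a per-platform image at deploy time. All names are illustrative.
package main

import "fmt"

// redisImageFor returns the Redis image to deploy for a node OS:
// the Windows port of Redis tops out at 3.2, Linux can use 5.0.
func redisImageFor(nodeOS string) string {
	if nodeOS == "windows" {
		return "example.registry/redis:3.2"
	}
	return "example.registry/redis:5.0"
}

func main() {
	fmt.Println(redisImageFor("windows")) // example.registry/redis:3.2
	fmt.Println(redisImageFor("linux"))   // example.registry/redis:5.0
}
```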
A
I don't think that's true at all; I think that's a false choice, because you're basically testing Kubernetes behavior on the output artifact of an image. It's just that the image is not supported within a given platform. So I'm not terribly concerned about that, so long as it's only about the behavior. As long as the behavior that we're checking in Kubernetes is consistent, the target image for guestbook should be pretty much irrelevant, because we should not be depending upon the target images for something like a guestbook application.
D
The reason I get squirrelly about it is because I think it increases the review burden and requires a lot more manual interpretation. So, this is a totally made-up scenario, it might not be realistic, but let's say there's something that needs to verify that host networking functions appropriately, and so it schedules an image; and then we change that so it becomes something that schedules an image depending upon the operating system, and the image itself doesn't actually exercise host networking, like the Windows one.
B
For example, if we can replace the image used without actually affecting the test, or what it does, or what the outcome is, we should be okay, right? For example, the Redis image is used in a couple of kubectl tests, which basically just verify the output for pods and RCs and stuff like that, which, from what I saw, has nothing to do with the fact that it is the Redis image, yeah.
A
What he's saying is that there are some things we depend on; you can't make that statement either, so it's got to be on a test-by-test basis. Because, I know the IPv6 work is a pain; it needs to move forward because people really need it. So: we should minimize the dependencies on image-specific data that's outside of our control where possible. That's the only generic statement we should make, and we should deal with the rest on a test-by-test basis.
G
But we do have an issue open for guestbook, though, and that's the one I just pasted in the chat. But I guess on that one: you mentioned taking it to SIG Apps, but it's marked as SIG CLI, and the test title is "Kubectl client guestbook application should create and stop a working application". And so for that one, I believe that if we simplify it so that it's still deploying an application, one that is OS-agnostic, then that would be a reasonable change for this one as well. Yes.
A
Yeah, I just want to make sure we don't go down a rabbit hole on this one about making general statements. Let's look at what the test is testing, and if it's relying upon a very specific version that is not platform-agnostic, we should remove that dependency where possible and use the agnostic images. I think that's the most general statement I can make that I feel comfortable with, and I think that squares with you, Aaron, as well. Yes.
J
I got behind just because of some vacation time and catching up, but I found my old notes and tried to review what images are being used by conformance, and then looked at the PRs that Claudia did to aggregate those images, and it still seems like there's a handful of images that we would need, rather than, you know, the ideal situation.
J
Is it only one? I think right now we're down to a handful, but there are some conflicts in my notes about what that exact list is; that's kind of due to a lack of ability to really clearly check which images are used. The best way I found to discover which were used during one conformance run was to build some custom debug lines into it, and just every time it asked for, you know, "get this image name", to dump that out.
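A sketch of that debug approach, assuming a hypothetical getE2EImage lookup; every image a run asks for gets dumped to the log:

```go
package main

import "log"

// getE2EImage stands in for the real short-name-to-reference lookup.
func getE2EImage(name string) string {
	return "example.registry/" + name + ":latest"
}

// getE2EImageLogged wraps the lookup with a debug line, so a full
// conformance run leaves a record of every image it requested.
func getE2EImageLogged(name string) string {
	img := getE2EImage(name)
	log.Printf("conformance run requested image: %s", img)
	return img
}

func main() {
	getE2EImageLogged("redis")
	getE2EImageLogged("nautilus")
}
```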
J
Yet the only ones I know that I think are pretty simple, and I think you had mentioned these too, are the kitten and the Nautilus, which are really just simple web servers that return an image based on the path you hit, or something like that, or they only support one path. So I think those should be simple. The other ones start getting more complicated, like some of those guestbook apps and some of the other Redis ones; we're not going to do that.
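For reference, the kitten/Nautilus behavior described is roughly the following; a hedged sketch with illustrative paths and placeholder responses:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A web server that returns a different picture per path,
	// which is essentially all these test images do.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch r.URL.Path {
		case "/kitten.jpg":
			fmt.Fprint(w, "<kitten image bytes>")
		case "/nautilus.jpg":
			fmt.Fprint(w, "<nautilus image bytes>")
		default:
			http.NotFound(w, r)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```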
F
We had an issue where somebody was doing local testing and their local image seemed to be different from the image that was actually in GCR, and just for debugging purposes it made it a little bit confusing. So when they went and actually submitted the PR, it would go through and fail, and so it would have been nice to have a versioning thing. But I mean, I'm okay, yeah.
A
So, just to follow up on the main root of this issue: the main issue that we have for deduping, you said it was closed, John. Should we reopen it and make it more of an umbrella issue? How do you want to track the progress of this, as well as other things that we uncover along the way? Yeah.
J
So I will make a list of things that seem reasonable to do now, and then we can reevaluate whether we want to open another ticket after that. And I think, since I'm only going to be adding three or four, five images there, I'm hoping that we can get all those in for the 1.16 release. Does that sound reasonable, Claudia? Yep, perfectly.
A
All right. I'm debating the order of how to address these ones that I listed below, and who wrote this portion down here. I think I'm going to jump around a little bit, because it seems like I want to put Clayton's up; I'm going to put this one up above, because I think we can actually hammer this one out pretty fast.
L
The problem currently, sure. Over the past month or so, and I can get the specific dates, we had some tests added which moved the goalposts: we had somewhere around eight hundred and some odd endpoints, and then sixteen more were added, and since we didn't add more endpoint coverage with conformance, that meant that we lost percentage. So we dropped under like 14-something percent coverage, and I was really wanting to understand how we were able to merge these particular ones.
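To make the arithmetic concrete (the counts below are made up, not the real numbers): adding endpoints without adding conformance coverage lowers the percentage.

```go
package main

import "fmt"

func main() {
	covered, total := 120.0, 850.0
	fmt.Printf("before: %.1f%%\n", 100*covered/total) // 14.1%
	total += 16 // sixteen new endpoints, none covered by conformance
	fmt.Printf("after:  %.1f%%\n", 100*covered/total) // 13.9%
}
```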
L
I did a little bit of conversation on the conformance channel, and it looks like one of the endpoints is optional, but I wouldn't have known that without just having a conversation about: well, why is it optional? It's in core. So I would love to know how we can keep the goalposts the same when we're adding optional endpoints; have some metadata, or something we can check.
L
We have some definitions in a KEP that say it needs to have tests, but we don't have any automation in place to do that. So I wanted to look at what we want policy-wise, and then also maybe some things for automation, so that we can catch these before they merge next time, instead of having these conversations about them after the fact.
A
The first one is hard; I don't know if I have a good answer for you, other than to walk the actual types data and make sure there's enough metadata and documentation to describe the fields, which I'm guessing there probably is, or there might not be, and take a look at this one in particular. Do you know, off the top of your head, if this was documented inside the typed field? So if you look underneath the API reference, do you see it listed as optional? It's not? That's...
F
This is actually exactly related to what sort of API it is, and whether the API is even installed or not. Like with the metrics one: it's an aggregated API server, so that's an optional decision. That's not going to appear anywhere in, you know, our API schema, because if it's not there, it doesn't even appear in the API schema, right? So there's not even a...
L
What I'm looking at is in the parameters to the API server: when you have enabled particular features, I need to know which endpoints, or which operations and which objects, that feature enables or disables, so we can tie them together. Because if it's not going to be there but it's still in core, it's confusing.
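One way to observe this from the outside is API discovery: an aggregated API like metrics.k8s.io simply isn't listed when it isn't installed. A sketch using client-go, assuming a reachable default kubeconfig:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig and list the API groups the server
	// actually serves; optional aggregated groups such as
	// metrics.k8s.io only show up here when they are installed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name)
	}
}
```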
A
So we were talking about feature flag detection for tests. Well, this gets weird: it's not exactly the same, John, as the profile-style workload versus feature-gated behavior. We really need to do a better job of defining the layers there, and we've not done that, because we've been too busy, like, triaging.
I
Feature gates are an interesting one, because if it's feature-gated, it should either be on or off; it shouldn't be off once it hits GA unless there's some extenuating circumstance, nothing should be. Like, the configuration gates enablement: this one is actually something besides the feature gate; this is an actual flag.
I
Feature gate is what he's saying, okay. Because, yeah, feature gates, I would say, once they reach GA, then they are fair game and the gate should always be on. If you turn it off... maybe there are actually two separate things here, now that I think about it: there's the profile aspect and the config variation, but there's another one, which is: are you allowed to turn off a feature gate once it has hit GA and still be considered conformant? I would probably say no, right?
A
So I think the problem here is there's a lot of organic growth from before extension mechanisms came into play, and anything that's optional should be an extension, right? Like CNI, CRI, CSI as well: those are well-defined extension points, and you're able to swap out individual provider-specific details there. And then there's anything that is a feature that has fields in the main API but is an optional parameter to the server.
D
We do have one of our policies saying it can't be a conformance test unless the things that it's exercising are stable. So in some sense it's hit the correct next step: all of the resources are now stable, v1; they're not beta, they're not alpha, so that criterion has been checked off. So now it's like: well, can we promote these tests? Are they written well? Can they be conformance tests? What I don't know is what mechanism we would have to say: you don't have conformance tests, your feature cannot go out the door.
F
It actually is in the KEP as a GA graduation criterion that they should have that, but like you said, there's nothing enforcing it other than people reviewing the GA graduation, and, you know, there are maybe too many criteria to follow, I don't know. So really, the KEP that included graduating those to GA should have included a requirement for conformance tests.
L
Yeah, Dan pointed out something kind of funny to me: if the percentage of coverage is on its way down, at what point, and how long, will it take for us to reach near zero percent coverage? Our goal is to get to 100% coverage, so I'm trying to find a way to keep the hole from getting deeper and start putting the dirt back in. Alright.
A
So then, for this particular one, we should hold the authors of the KEP accountable and try to push them in the right direction where possible: so, open an issue, assign it as some type of blocker, and you can assign me; I'll put all the right flags on it and loop in the release team there. I don't think we have a formal policy for how we enforce this, so maybe we talk about this too, as well, at the higher level.
D
It's the tech debt we've accumulated. Ideally, the KEP describes a checklist. In an ideal world, that checklist is completely machine-enforceable; in the world where it's not, we've had subject matter experts agree that the checklist contains the right items, and it's the responsibility of the release team to make sure that those checkboxes have been checked off.
A
There's a part of me that says yes, we need more process, and a part of me that bristles: no, we don't need more process. So I think we just need to hold people accountable. Because right now, in SIG Cluster Lifecycle, we hold each other accountable: if we are going to promote something to GA, we make damn sure that we've dotted all the i's and crossed all the t's, and if we make a mistake, we do a mea culpa.
D
My question is just mechanical, to that question: if the release team decides no, this hasn't been done, and we're at the end, what is the mechanism by which they could prevent this from going out the door? Is it defaulting a feature flag to false? Is it reverting the PR that added the endpoints entirely? That's where I'm less clear on what the enforcement mechanism is.
A
It can be, but doing reverts is actually easier than, I think, some of the other stuff. So in the meantime, I think what we do is we raise the issue. It's also, I think, out of our purview to be the angry horsemen, so I think we should raise the issue, pass it back to the owning SIGs and say: hey, we're noticing a pattern, but there are two separate issues.
E
I just wanted to interject with two points, but I'm perfectly fine to move this to SIG Architecture. I'm struggling to find the exact link, but it seems like everybody has agreed that there is a requirement for graduation that things need to have conformance tests with them. But Hippie was going to propose an idea of a new Prow plugin that was going to look for, essentially, reductions in coverage, that could then be a blocker for graduation to GA, and ideally would be pushing people with PRs to write conformance tests much earlier in the process.
L
Just a quick note on how that would work: we actually just record whether this particular endpoint or object has conformance tests, and that's the database that the new plugin consumes. So when we have particular changes to the things getting promoted, we're going to start notifying on those and say: hey, we noticed there are no tests, even in the alpha form; here's where you're lacking coverage. That would be somewhat useful and usable. I just wanted to throw that in.
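A sketch of the core of such a check, since the plugin only exists as an idea here: compare per-endpoint coverage before and after a change, and flag regressions.

```go
package main

import "fmt"

// regressions lists endpoints that had conformance coverage before a
// change but not after it; a CI check could block on a non-empty result.
func regressions(before, after map[string]bool) []string {
	var lost []string
	for endpoint, covered := range before {
		if covered && !after[endpoint] {
			lost = append(lost, endpoint)
		}
	}
	return lost
}

func main() {
	before := map[string]bool{"/apis/apps/v1/deployments": true}
	after := map[string]bool{"/apis/apps/v1/deployments": false}
	fmt.Println(regressions(before, after))
}
```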
I
So we merged a conformance test that covers a v1 API that has been v1 for a very, very long time, and then we realized it depends on an optional feature: the metrics server. Horizontal pod autoscaling, the v1 autoscaling part of the Kube API: if you have a metrics server installed, the v1 HPA feature is effectively supposed to behave the same everywhere.
I
Autoscaling is actually a pretty common end-user feature, even if the implementations might differ, and this is actually a case that we have: we have the default metrics-server implementation, which exposes an API that the HPA controller reads, and that uses cAdvisor. There are alternative implementations in production in the wild; I know of at least the one for Prometheus, but there is one for Influx, I believe, as well, that provide rough equivalence, and there may be others.
I
So we are in the unique case that we have a GA, external, important, user-facing feature that depends on an optional component, and not all conformant distributions ship that. So we brainstormed a couple of options that are worth just bringing up here, even if we don't have to settle it or have the full discussion. One of which is: anything that depends on an optional feature cannot be part of conformance, which is kind of weird. We've got another example:
I
the implementation of CSI is not required for the cluster; there is a description of how things should work when they are under CSI, but we don't care about the implementation. The second option is that this is something like a profile, and this is like another example: we talked about security profiles for multi-tenancy, but this might be a novel one, which is:
I
is this an optional-features profile, or a metrics-features profile? And then there may be a third option, which may or may not make sense, which is: if the metrics API is there, the HPA controller should work. So if you are a conformant distribution that set up this prerequisite, then this API is expected to behave a certain way, and you can verify conformance.
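A hedged sketch of that third option: only assert HPA behavior when the metrics API prerequisite is actually served. The helper is illustrative, not the real e2e framework.

```go
package conformance

import (
	"fmt"

	"k8s.io/client-go/discovery"
)

// maybeTestHPA asserts HPA behavior only when the optional metrics API
// prerequisite is served; otherwise the test is skipped, not failed.
func maybeTestHPA(dc discovery.DiscoveryInterface) error {
	if _, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
		return fmt.Errorf("skipping: metrics API not served: %w", err)
	}
	// ...create an HPA against a deployment and assert it scales
	// within the documented window...
	return nil
}
```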
I
That would be somewhat analogous to the thing that we just kicked out; we got rid of all of our conformance tests that skip. But there's a reasonable statement (and this starts to get into the stuff Tim was bringing up on the previous one) that if you must do configuration in order for this to work, you can still verify that it works, but you can't verify it all the time. So do we say that every feature that may or may not be configured has to be in a unique profile,
I
so you can run all these profiles individually? Or do we consider the possibility that two clusters are both conformant if everything that should work a certain way does, with some of those things being optional? And there might be a couple of other permutations, but those were the three that I think we came up with.
A
So I think we can discard this one: that anything that is an optional feature should not be part of conformance. I think, if they're v1-related, I do think we should have some level of guarantee to the end users that says this behavior has a set of tests that you can bet against, right?
I
I think this is a good case-law kind of thing: we are just now reaching the threshold where we have enough coverage that we can ask this important question that we never really could. We kind of made some statements early on in conformance, but this is a good time to say, by case law: if you're a v1 feature, even if you're optional, we want to test you, which is a good constraint now.
A
I think, for the last two, I don't have strong opinions of A versus B, other than that I think if we do actually start to create well-defined profiles, we need to do it in earnest and think about it across not just this case but across other ones, and that might mean modifications to the tags that exist inside of the tests to denote some level of profile, right? So:
F
how features hang together. If we do decorate the tests with some sort of feature tag, then we can do an analysis, based on existing conformant providers, of which of those are, you know, truly very common; some sort of factor analysis, essentially, to say: hey, these profiles fall out in this way. There's a cloud-provider one that, you know, all the major cloud providers include: metrics, load balancers for services, etc., etc.
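If tags were the mechanism, it might look something like this; the [ConformanceProfile:Metrics] tag is purely hypothetical, by analogy with today's [Conformance] and [Feature:...] tags:

```go
package conformance

import "github.com/onsi/ginkgo"

var _ = ginkgo.Describe("[sig-autoscaling] HorizontalPodAutoscaler", func() {
	// Hypothetical profile tag; nothing named ConformanceProfile exists
	// today. A profile run would focus on or skip tests carrying it.
	ginkgo.It("should scale a deployment [Conformance][ConformanceProfile:Metrics]", func() {
		// ...behavioral assertions gated on the metrics profile...
	})
})
```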
I
HPA, I believe, behaves the same everywhere, and that's actually the good point here: it doesn't change. So there are kind of two different types: there's nuanced behavior and on-off behavior, and I'm not sure they're exactly the same. This is not a case of nuanced behavior at the high level; it's: if you have horizontal pod autoscaling and you say this, then we run something with this.
I
I think the end-user benefit, which has always been the goal for conformance, is we would fix the conformance test to take into account a reasonable range of diverging behavior, or set like an SLI-type threshold. So if you said, at 60% CPU within five minutes you'll be scaled up, the five minutes is a key part of conformance as well, which is: what's the expectation someone has of an API? But yeah, the behavior here is on or off; it's not that it does something different everywhere.
F
Right, but the plan, the whole point of conformance, is that users can rely on their workloads being portable. So if there's a feature that's optional that your workload depends upon, then you have to know: okay, my workload requires something above and beyond a conformant cluster in order to work, you know, I guess.
I
There's an interesting distinction here, too, of who is the party doing the verification. We have said that we're doing this for the end user; the reality, I think, is that the vast majority of emphasis is placed by someone providing a Kubernetes distribution (I'm not saying it's that black and white), and it doesn't mean that the other side, the people consuming,
I
isn't important. But the guidance for profiles for those users might be different: if I have a workload, only I know whether I use HPA. We do not have tooling today that tells you which profiles are required to run your application, and so some of these aspects are good, but we're probably not going to get to them for a while. Do we focus on broadening conformance, or do we focus on closing out the end-user experience?
D
I'm not sure I have an answer to that; just other random thoughts. I keep trying to frame this mentally in terms of CRI versus CSI, so correct me if I'm wrong here: I think CRI is something that has a default implementation out of the box, which is Docker, but somebody can configure their cluster to use some other container runtime, and I expect the conformance tests will still pass. CSI, we currently don't have any mechanism to exercise anything that CSI does, so we don't call that conformant right now.
I
Another thing, it's interesting you brought that up, Aaron: Docker. So Docker isn't technically a default CRI, actually, because Docker on Windows and Docker on Linux behave differently, and so we have a single conformance test suite that covers Windows and Linux where the CRI has completely different behavior under the covers because of the platform distinction.
I
If people cared that much, wouldn't we have heard about it? This feels like the tail wagging the dog, which is: you must be conformant and therefore must run metrics server. I'd be like: people are fine not running metrics server; great, you just don't get HPA. If I'm running Windows, you don't get it.
I
I think that aspect of it is an optional feature you can have. So the interesting thing here is: HPA is optional because, for time-varying workloads, there are plenty of other ways. Like, if you have a cluster that automatically scales up and scales down applications after 24 hours, you'd still be conformant today, right? You are allowed to have a cluster where the administrator makes decisions about your application outside of your control, and so HPA...
A
I think we could enter into this full philosophical conundrum of what's the meaning of life. One of the things I think we might want to start doing here (we've talked about it for a long time) is we really need to lay down the taxonomy of what the demarcation lines are for certain features and behaviors, and what we consider to be a profile and not a profile. And we have some of this documentation and literature circling around that.
A
We started many half-attempts, and we even have more documentation since; like, I believe Brad had started some as well, that kind of overlaps with this conversation. I think it really behooves us to write this down in a doc and to start iterating as a group on this space, because we keep treading into it, either purposefully or by accident.
D
Except that, I was just going to say, I feel like operationally that's sort of the reason we split meetings up the way we have: this one is supposed to be tactically focused on what we can improve before we hit that line, and if we hit that line, we should bounce back and focus more on driving things forward.
A
Yeah, exactly. But one of the things we do in other SIGs is we have a set of issues that we link at the top, and they're basically long-standing issues; if we have updates or thoughts on them, we just kind of go back to them. They're the large-level feature epics that we try to track over time, and I think this is one of them.
A
So here's my proposal: let's open up an issue and track it at the top as a long-standing epic that we need to revisit, and try to gain traction on as a group, revisiting it periodically, like every meeting, to see if we can start to refine it and take a stab at it, time permitting. In the meantime, let's still focus on the low-hanging fruit, of which we have plenty: trying to get the increased endpoint coverage, and holding the rest of the group accountable for features that get promoted.
A
I look forward to reading it, maybe, when you get it posted. Also, I think we are overdue for a bit of backlog grooming. I propose maybe earlier, either Thursday or Friday, having an impromptu session for those who are interested, to walk through the backlog and make sure we get it back in sorted order, so that by the next conversation we can have maybe a little bit more structured outcomes.
A
We're at time; why don't we coordinate on the channel. Let's do that. Thanks.