From YouTube: 20190813 sig arch conformance office hours
...would be less than ideal. So, ideally, every conformance test, because it is not slow, flaky, disruptive, or dependent on a feature, will be part of the release-master-blocking set of jobs in one form or another. Okay, and those need to... right now, the set of criteria we have is: those need to run at least every three hours, and they must take no more than two hours to complete. There are jobs that are wildly in violation of this, for what it's worth, and then there are some just best guesses on the numbers and amounts of time.
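The criteria above can be sketched as a tag check over e2e test names: Kubernetes e2e tests embed bracketed tags such as [Slow] or [Feature:...] in their Ginkgo descriptions. This is a minimal illustration under that naming convention, not the real promotion tooling; the helper name is made up.

```go
package main

import (
	"fmt"
	"strings"
)

// Tags that, per the criteria discussed above, disqualify a test from
// conformance promotion: slow, flaky, disruptive, or feature-dependent.
var disqualifying = []string{"[Slow]", "[Flaky]", "[Disruptive]", "[Feature:"}

// eligibleForConformance is a hypothetical helper: it returns false if
// the test name carries any disqualifying tag.
func eligibleForConformance(testName string) bool {
	for _, tag := range disqualifying {
		if strings.Contains(testName, tag) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(eligibleForConformance("[sig-network] DNS should provide DNS for services"))
	fmt.Println(eligibleForConformance("[sig-autoscaling] HPA [Feature:HPA] should scale"))
}
```

The runtime bounds discussed (run at least every three hours, finish within two) live in the job configuration rather than in the tests themselves, so a check like this only covers the tag half of the criteria.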
F
This was my thing, where I feel like occasionally Tim and I kind of swoop in and try to figure out what is going on within a test. We can make a best effort, and Tim and I are both motivated to be kind of pushy, but we are not the people who originally wrote these tests. Normally it's the people who have the domain expertise who can really dig into: is this appropriate behavior? Does it need to be exercised in this specific way?
F
And I don't know why it was removed. So, if we want to dig into the specifics of this, the deal is that the call that was removed was a call that exercises the kubelet's undocumented, unversioned API, which we all collectively agreed is not something we want to allow for conformance tests.
F
So if I just want to be pushy and promote this, I would say: we can remove those, because we don't need them, because we don't want this test to talk to the kubelet API. What I don't have is the subject matter expert in front of me saying: no, no, actually, you need to hear from the kubelet to see what its state of the world is, otherwise this is not valid.
C
No, we do need to have the domain experts decide on promotion of the tests; they should be part of the approval process, and I think that kind of happens naturally because of how ownership works. Aaron, if you approve something, it approves for the whole directory tree, whereas when I go in and want to approve something, it's only going to approve a subset of that. So I need to go talk to Node, or I need to go talk to Apps.
C
We need to go talk to Scheduling and get somebody there to approve if the files being touched are owned by them. So in that sense, with the OWNERS files, we do get some natural need to have people do it. But I don't think we should be in this group just deciding to promote things unless we have concurrence with those SIGs. Oh, Srini? Yeah, I am.
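The per-directory approval being described comes from OWNERS files in the test tree: an approval only covers files under directories the approver owns. A minimal sketch, with hypothetical usernames and path, of how such a file scopes approval:

```yaml
# test/e2e/scheduling/OWNERS (hypothetical path and names)
approvers:
  - alice    # hypothetical SIG Scheduling approver
reviewers:
  - bob
labels:
  - sig/scheduling
```

A PR touching files under this directory needs an approval from someone in this file (or an OWNERS file higher in the tree), which is the "natural need to have people do it" mentioned above.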
A
...a whole SIG, okay. So how can we make not everybody responsible, and how can we get a group of people who are responsible? Because it sounds like our hack would be: hey John, you happen to work next to a bunch of Googlers who are domain experts in a few specific areas, so we'll just use you as our proxy shoulder-tap.
F
...violate that, and so now we need to figure out, for these violations: number one, we could just kick them out, boom, done, don't care. But then, does that mean we're not exercising these behaviors? And are these behaviors valid behaviors that should function across all Kubernetes clusters? If so, could we please have the tests? You know, should we have these behaviors exercised in a way that lines up with our conformance guidelines?
B
Yeah, we've been updating that as we get feedback and requirements, in general. But I think there's a difference between what makes a good conformance test overall, those overall requirements, and digging down to a specific subject matter expert who may have enough context when we get pushback of "this isn't a good idea." I just want to ask them whether we should create an update to the requirements doc or not.
A
I think the point of the matter is: how do we use a carrot-and-stick philosophy to get the right behavior from the SIGs that are not responsive, to get them to make the call on some of these issues? And the answer that I have for Aaron is: I don't know. This has been a systemic problem; we'd have to somehow promote some methodology or policy to get priority for them to do these things. And different SIGs behave differently.
A
In SIG Cluster Lifecycle we have a mantra of constant triage, so there's somebody that should always be triaging inbound, and if something is inbound and it's not triaged and assigned to somebody, then we do something about it. But that's not the same for other SIGs. So I don't know how you get API Machinery to move on issues that are six months old, or how to do that for SIG Node. I don't know how you do that.
F
One thought: ostensibly, all of the SIGs that really care about this are actively participating within SIG Architecture. This is a subproject of SIG Architecture, and the folks who are showing up to SIG Architecture are the folks we should be tapping on the shoulder. If you're not showing up to our meetings, apparently you're off doing whatever. But I have a really tough time believing that people from API Machinery, you know, Storage, and Networking don't show up to SIG Architecture, and that's maybe the lever that I would use: if your SIG doesn't have a person dedicated to triaging this...
C
So we have a few things going on that kind of interact. We also now have a requirement that people write conformance tests for GA features. So one demand we put on SIGs is: hey, can you review this existing test and see if it's functionally appropriate? And another is: hey, you need to write some new tests for your new stuff. So it seems like we have...
C
We have sort of a confluence of things here. We need this process, or this stick, really; it's a stick at this point that we can use. And maybe, to make that easy, when we do our SIG updates at SIG Arch, we should add a little section where we say: hey, here are the open issues we need input on from this SIG.
A
One thing we might want to do, just so we do our due diligence prior to that SIG Arch meeting: scrub the backlog for any other issues, or at least notify on the conformance Slack channel that somebody is doing a scrub to verify what we're all waiting on, so that way we're at least complete in our assessment.
F
So, I'm also selfishly interested in this, so that John and I don't have to be the shoulder-tappers. We have these contractors, and I want to make sure they're empowered to get what they need without us being middlemen. I feel like I have seen folks from ii pinging the conformance Slack channel with not a whole lot of feedback or response. I wish I could see more response when they do that. It also makes me think that this apparently isn't an effective channel for escalation.
G
Right, so let me try to explain. The premise of this is: if you're going to think about conformance profiles, the ideal thing, if it's possible, is that there should be a process where you at least check and say: maybe we can go ahead and move this into core conformance. You'd like to do that first, because the whole world is a lot happier if there's just one set of core conformance tests. So that's what you'd like to happen.
G
Okay, maybe there needs to be a storage profile, for example. But once you turn that key and say we want to start having profiles, you need to be very careful, because if you're going to go down that route, you're probably going to want to have multiple conformance profiles aggregated together, so that all the major vendors are essentially providing the same functionality. The scariness comes from my real-world experience of running this in a previous cloud...
G
...infrastructure. I was the lead for what was called the interoperability challenge, and I had to make sure we could run portable workloads across 15 different cloud infrastructures, public, private, what have you, and I had to do it live on stage. I couldn't imagine pulling that off successfully if we start allowing, let's say, five new profiles, where every vendor, Google, Amazon, whoever, starts hoarding their own desired subsets of those profiles. Then you can just say: you know what, we've lost; we really don't have interoperability.
G
We really don't have portability, because everybody had so much free range to make those decisions, instead of taking the time to do consensus building, where you would at least take the time to say: no, no, let's all agree that these new four profiles we've created are really the ones that are going to meet what we call the enterprise platform, for lack of a better term. Because maybe they'll say: well, we want something for Raspberry Pis. That's always the case that comes up.
G
So what that is trying to argue for is that we ought to be slow and deliberate and consensus-building as we try to come up with profiles. Now, that's all well and good, but what I talked about in the comment is how frustrating that is for a Tim or a Srini who says: listen, we all had this discussion at KubeCon Seattle; we still need to start building out these tests. I need to be able to experiment with them. I need to kick the tires.
G
I need to work with the SIGs; I need to get rolling on this. And I understand conformance folks need to have a little bit of process to make sure silly things don't happen. So this was a way of saying: go build your validation suites, go kick the tires, experiment, release, whatever, and then, as they get mature...
G
...look at those validation suites and have a process that says: let's find a way to decide if they go into core, or if we need to actually merge them into a conformance profile. Because if we're going to do that, let's aggregate them together, so we don't end up with every public cloud platform supporting their own desired subset. Let's say you've got six profiles and each platform supports four of them, and it's a different set of four.
G
Then you've lost on conformance. So this was a way to separate between letting testers create tests, do what they need to do, and have the speed that they need, while allowing conformance to have the time to do consensus building in the community. That way our end users, at the end, perceive, for the majority of cases if need be, that conformance feels like one suite of tests, even if it was an agreed-to aggregation of a subset of conformance profiles. So...
A
G
Essentially the same thing. The only thing that the validation suite tag was carrying with it was: hey, we think these suites of tests are one day eventually going to either need to go into core conformance or have a discussion about going into a profile, and having that extra notion of where these are going to be.
G
Please let the test creators go fast, but please let the conformance consensus process go slow. Because if you don't let conformance consensus go slow, pretty much everybody can start adding conformance tags and say: this is now my profile, and this is Srini's profile, and this is so-and-so's profile. It was trying to make the distinction: there are places where we want people to go fast, and there are places where we want to get consensus from the community. And that was what we discussed, if you go watch the video from KubeCon Seattle.
A
In the link there. So here's my heartache: I agree with Aaron, and I'll let you go after it, tag team. This already kind of exists, except it's less formal, and I'm a fan of policy, not necessarily formality, with tags. We've created a lot of policy so far. We don't have enforcement, which is a bummer, but we at least have a document of, you know, if we could have a pony, this is kind of what it would look like.
F
So I feel like I have the exact same heartache. But for context, and maybe I'm misremembering this, where I thought some of the discussion was headed back in Seattle, when we were talking face-to-face, was around validation that, like, this given CRI implementation actually meets the definition of CRI, or this given CSI driver does all of the things that we would expect of a CSI driver; so, sort of drawing that boundary. That's right.
C
If I can jump in: yeah, that makes sense to me as well. It's like there are two different kinds of conformance tests. One is from the point of view of the user. And it sounds like, I wasn't there unfortunately, but it sounds like what Aaron's talking about is from the point of view of the system, or from the control plane: that some pluggable interface works as expected. Which is really a completely different question, and probably doesn't even belong in the... yeah, we wouldn't even want an end-to-end test at that point.
G
Well, the other example that came up in Seattle was when we were looking at Windows and what we were going to do about Windows. At the time, Windows was pushing for, what's the right word, they wanted to have their own sort of conformance suite, where they tweaked the conformance suite; basically they were writing the tests...
G
...as a "hey, skip if Windows" kind of thing. So the other piece of context that we had in Seattle, and again one of the things we were thinking for validation suites, was: okay, well, go build a validation suite, see what that does, and then, you know, come back to us. Now, there were some other issues, and some other folks had other concerns with Windows, and they totally changed the model and said, oh, just... whatever. But all I'm trying to get across is: you can use an existing tag.
G
I don't care; I'm not in love with the tag. I just need people to think, and to have a way to think. The first thing that should pop into their head should not be "I'm creating a new conformance profile." The storage folks are the classic example, because they're like: we have all the storage tests, but not everybody supports storage, what do we do? Go create a storage profile. That's what I'm trying to stop from happening, if that makes sense. I agree.
C
That makes sense. I think at this point we don't have any profiles, really; we've managed to resist it, maybe to a fault, but we managed to resist it. And I don't even know if profiles... I mean, yes, it's too much trouble to start out, but I don't even know if it's the right approach, right? I mean, right now we don't even have nearly enough coverage for core, let alone the profiles.
I
The idea is basically to bucket the tests that should cover the core behavior of that subsystem. Right, I mean, that is what you are going to call validation, which is a precursor to profiles at some point in time. But unless we provide that path, there will be people coming through asking to see which is part of core conformance or not.
C
So, lastly: in the last meeting we talked about some other issues related to optional features and feature enablement; there's a bunch of related things, and I think validation suites and profiles have sort of been two attempts to do this. I was tasked with creating an issue, which I just did before this meeting, that is sort of supposed to be an umbrella, and it pulls in all of those, and I...
C
I don't know why, but I'm not satisfied, I guess, with validation suites or profiles as answers for all the questions of how we deal with optional features, and with features that, like the ones that came up because of HPA, are really considered kind of central features but rely upon an optional component being deployed in the cluster. Really, DNS, frankly, is not technically required either, but it's something we require for conformance, right? So...
G
I think we were trying to fix the thing that Aaron was talking about, which is that people would always shop and say: well, I need a suite of tests that verify X, right? And I didn't want to stand in those people's way; I want them to go build their tests, right. But I didn't want the first thing coming out of their mouth being: oh, and I'm just going to call it a profile. So I'm cool...
G
If you don't know my personal opinion, if it's not been transparent enough: I think the day we start doing conformance profiles is the day we fundamentally hurt Kubernetes and its ability to run everywhere. That's my personal opinion. So if John says: Brad, you know, I think the best thing is we only have core conformance, we're done; I wholeheartedly agree. The only reason we tried to do validation suites was because when I took that very firm position...
A
We haven't settled even on the notion of profiles, as John mentioned earlier. Like, right now, there's nothing that prevents anyone from using the Feature tag, and it's totally out of band; anyone can do it. So there's nothing that prevents them from doing that today, and they do it all the time. Anything that's feature-gated in the main repository, even extension points at this point, can have a Feature tag for doing that, and they do that today.
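The out-of-band tag mechanism being described works by regex matching over test names, which is also how a conformance run selects its tests (the equivalent of the e2e binary's focus and skip flags). A rough sketch; the exact regexes real conformance jobs pass may differ:

```go
package main

import (
	"fmt"
	"regexp"
)

// Select tests purely by the bracketed tags embedded in their names,
// mirroring a focus/skip regex pair. The patterns here are illustrative.
var (
	focus = regexp.MustCompile(`\[Conformance\]`)
	skip  = regexp.MustCompile(`\[Serial\]|\[Feature:[^\]]+\]`)
)

// selected reports whether a test name would be run by this filter pair.
func selected(testName string) bool {
	return focus.MatchString(testName) && !skip.MatchString(testName)
}

func main() {
	fmt.Println(selected("[sig-api-machinery] Watch should observe an add [Conformance]"))
	fmt.Println(selected("[sig-autoscaling] HPA [Feature:HPA] should scale [Conformance]"))
}
```

Because the tags live in free-form test names, anyone adding a `[Feature:...]` tag immediately changes what such a filter selects, with no review gate in the selection mechanism itself, which is the "totally out of band" point above.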
A
So I think creating a distinction before we've actually figured out what we even want to do is maybe a little bit premature. I think we need to figure out a taxonomy, a layering, for this, which we haven't even agreed upon; the HPA issue kind of forced us to rethink it. The problem we currently have is that we have v1 APIs that we can't necessarily promote to conformance because they're optional features, right? HPA is a classic example.
A
Where, like, you don't need to have HPA, as John was mentioning, and what's really on us, and coming up on us, is to figure out: how do we test these things? I don't think validation suites, having to get another tag, kind of fits in this. I do think the nomenclature you used there I found to be confusing, probably because I wasn't part of the conversation originally; so reading it, I reread it like three times, and it just reads like: my interpretation of it equals X.
A
So I am at the point where I think we need to figure out how to document this stuff, and I think that's a reasonable expectation to put upon this group, and an issue has been opened against that. I'm not in favor of adding yet another stratification to the existing system, I think.
G
That's entirely fair. And again, the context was that there were people at the time, William Denniss, for example, and others, that were applying pressure and saying that stratification was needed; so the validation suite concept was a response to the pressure we were receiving at KubeCon. Yeah. No, but if we are all in agreement that that pressure is not as severe right now, I'm so cool with that. I'm...
G
...so cool with, you know, putting these things on the shelf: let's go figure it out. But if you remember, at the time we had the Windows platform guys come in saying "I need a Windows profile," and we had the storage folks in the room saying "and I need a storage profile." I think this might be from before.
H
Oh sorry, I just went and reread that issue and my comments on it and refreshed my memory. So my opinion is different from yours, in that we absolutely need profiles, and I think I can, you know, very succinctly explain why. I also think that there appear to be other needs for groupings, which are different from my understanding of the need for profiles, and I think it's fairly reasonable to allow those kinds of things. The storage case here, in my mind, is not actually a profile; it's a grouping.
H
That's what Srini was getting to; it was just a precursor. So, the short summary of what I was about to say is: I would be happy, and I think I signed up for, reviewing that proposal and/or coming up with a counterproposal, and I think I just dropped the ball on that, you know, back in June or whenever it was, and lots of things have happened since then. So I'd be happy to pick that ball up again and have the debates with whoever is interested in having them, to either refine...
H
...what was put down. I think Brad wrote that it had some value. I think I added a lot of comments along similar lines to yours, Tim, which is that there are lots of words and lots of different taxonomies that I'm not entirely clear on. I'd be happy to try and help clean that up, either with Brad or anybody else who needs to participate, and I think that would be a useful exercise, if for no other reason than it would allow people like Patrick...
G
...the issue. And we used to hear a lot more of that in the past. That's a fine view, but that view has to be managed properly. And this is what I mean by managed properly: you want 80%, or whatever the number is, of the things that are in whatever you want to call the sweet spot for Kubernetes. And I want to be careful, because if I don't categorize it right, somebody gets their feathers ruffled, but for giggles, let's say: enterprise Kubernetes, running enterprise workloads on a public cloud platform.
G
Whatever we do with these things, we're going to want to make sure that all of those equivalence classes of platforms support all the same conformance profiles. Because I guarantee you, the day that you decide that Google's OK with supporting its set, and IBM supports its set, and Azure supports its set, you have lost workload portability, and you've dramatically damaged it.
C
...want to be involved in that. I did just post in the chat the link to the issue I created earlier that tries to gather all these things. The thing I left out of there was feature enablement: how we manage feature enablement and discoverability of feature enablement, which we don't really have right now. But that's a place where at least we can gather all of that discussion.
E
Yeah, it was just a real short comment. So Brad had brought up the thing about Windows profiles, and a large reason we backed away from that for the time being is, personally, one of the goals of the December meeting was to get a path forward on how we would test Windows. A couple of principles that we all agreed upon were that it makes sense for new tests to be written OS-agnostic, and if that means replacing an image with one that is agnostic across OSes, that gets done. That's been an effort that we've been carrying forward.
E
So it's going to be one of those long efforts. That's all I have on that. And so, whether you want to call this a validation suite or a conformance profile, either way, I'd be happy to help define the one that reflects, you know, that Windows is an optional feature that people can somehow opt into and get, you know, a minimum bar of functionality that's tested for it. It sounds...
A
...project worked out, and there is only one thing that is of critical importance, and that was the data coming back from ii with regards to what the current coverage is that we have with just the pod fields. The next item on the list that somebody had added was a question about dependencies on the guestbook application for conformance. I thought we already talked about something like this before, did we? I swear we talked about this.
F
The other thing this just brings to mind: this sounds like another domain expert question, except I don't even know which domain expert I'm trying to ask here, because the test name says "sig-cli Kubectl client should be able to create the guestbook application," but I don't understand what it's actually trying to exercise. Is this proving that the kubectl client can create a thing, given some arbitrary resources? Or is it this specific thing?
C
From what I understand, and we can bring in Brian on this one: Brian said that originally the guestbook was just what he used for validation, you know, early on, and maybe it's not the most appropriate way to do it at this point, because we already have tests to cover each of those specific behaviors. I guess it's sort of an overall integration test that's exercising a whole bunch of different behaviors all at the same time, but it's not like...
C
F
I imagine that this is something folks have used as a proxy for some kind of general smoke test that exercises a couple of behaviors, and if we still need that, then it's probably fair to have a conversation of: is this specific test, with the guestbook application and this specific version of Redis, no longer suitable for that? Should we rewrite it into something else? Great.
F
And my problem is, I don't know who owns that aggregate smoke test. Like, maybe it's SIG Architecture; maybe we should be talking to SIG Networking. This feels like a case where we should be parachuting in some domain experts, and I just don't know who to get involved.
F
Like, I don't own that list of behaviors, nor do I own which of those should be exercised for us to all agree it's a smoke test, which is why I feel like it's a SIG Arch decision when it comes to enumerating behaviors. But it sounds like, specifically here, the set of behaviors that we're having questions about are all network-related as well.
C
Well, we're relying upon... we needed to upgrade Redis because of a networking issue, but, you know, it just so happens it doesn't work on Windows. I mean, I don't know that it's the application... the guestbook application maybe doesn't need to use Redis, right? Maybe it can use something else, and I think that's what Patrick or Claudia was suggesting. No, I...
A
We have three minutes left. I don't necessarily want to go through the backlog again; we had done it the last couple of times, and we did an impromptu meeting to go through it. I know that folks have actually been churning through it, and I think that the approval list has actually been knocked down by a couple of issues, so that's good too. Are there any other last questions, comments, complaints, or concerns, given that we have three minutes left?
C
My only concern with that was, and I just don't know: if we're going to add that provider, the way the test is written right now, oh geez, it's going to start failing, because it's not going to be skipped on GCE. I put in tooling so that you can skip based on that provider, to try to handle that going forward. I see.
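Provider-based skipping of the sort described here, where a test declares which cloud providers it works on, can be sketched as below. The helper is illustrative only; the real e2e framework has its own skip utilities and provider wiring.

```go
package main

import "fmt"

// shouldRun is a hypothetical helper: it reports whether a test that
// declares a list of supported providers should run under the given
// provider, or be skipped (the inverse of a skip-unless-provider check).
func shouldRun(provider string, supported ...string) bool {
	for _, p := range supported {
		if provider == p {
			return true
		}
	}
	return false
}

func main() {
	// A test declared to work only on gce/gke runs there and is
	// skipped everywhere else:
	fmt.Println(shouldRun("gce", "gce", "gke"))
	fmt.Println(shouldRun("aws", "gce", "gke"))
}
```

Note that a test needing this kind of skip is, by the criteria discussed earlier in the meeting, provider-dependent and therefore not a candidate for conformance as written.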
A
I don't know how this happens, but it's been happening, so I'm going to be conversing with Jordan about this in more detail. But I think we need to raise awareness for people who are doing API reviews, and for the SIGs: if they have a KEP, they need to be responsible for promoting tests.
C
One of the things we're also doing, Tim, and you probably saw this, is this production readiness review. I think we need an overall... there's a number of gates that things should go through as they go from alpha to beta to GA, and right now we don't really have that; it's just putting your graduation criteria in the KEP. I think we probably need some more formal process around: here are the criteria for going to beta, here are the criteria for GA. Part of that would be the production readiness review, and part of that the conformance review.
F
F
A conformance question real quick: I don't think we have a canonical guide to writing a test well yet. We still lack an example; I can't even think of one off the top of my head of, like, "this is the perfect test, just copy this." Until we have that, I don't think we should close the issue around "could we have a guide on how to write tests well?" This is partially also owned by the testing...
F
...commons subproject, who've been doing a lot of refactoring of the e2e framework and have some specific anti-patterns that they want avoided. We're chipping away at this iteratively with the requirements in the conformance doc, which I've been asking the ii folks to update as they encounter roadblocks or questions. Instead of having us answer those questions tribally, those questions should be answered by the doc, and if they aren't, we should update the doc, and this group should come to consensus on whether that's the appropriate answer. That's...