From YouTube: 20200505 SIG Arch Conformance
A
That's right. Hello, everyone, and welcome to the conformance subgroup meeting of SIG Architecture. I'm your host, Hippie Hacker. Just remember: we are abiding by the CNCF Code of Conduct, which is "be kind", and we will be recording this live; it'll be on YouTube later today, if I can figure out what button to hit. I will go ahead and share my screen.
A
So we can look at the agenda today, and there we are. This is a sub-project of SIG Architecture, so if you click on the top level you can get there. There's our meeting information and requirements. We're going to talk a bit about our backlog on watches and how we do that. Please add yourself to the attendee notes, and if someone wants to take over as note-taker, I am going to be unable to do that today.
A
A quick rundown of action items from last week: we need some tooling, actually a couple of tools. One of them was specifically for replication controller, and Clayton was willing to give a hand with that, and did so as far as a bit of feedback, but there may be a bit more in the pipeline for that. This was an issue that we opened around the events watch tooling, and we have a small proposed solution that we'd love to get feedback on.
A
We can go through that, but my suggestion is that we have some feedback from Clayton, and a little more feedback from the community would be great. It affects at least seven points on the board, if we can find a way to test for these expected watch events. And this is... oh, this is the general one; maybe I got the link wrong, actually, because this was supposed to be... there's another ticket for replication controller to link. So that was this other link down here.
A
This one is specific to replication controller tooling, so my apologies. And this PR, let's see where we are here... it got an LGTM and a review from neolit, so we can look into that. Thank you, neolit, for the review; we'll go back and update that tooling with this, it's quite a bit.
A
Thank you. And then I'm going to delete this other one here, for the watch event, because it was about replication controller tooling. The other action item we had is: when can we promote tests? I went through and asked the SIG Release team, and eventually this merged to give us the new and updated release cycle. It's likely we're going to do three releases this year, and the tests need to be in by July 16th for 1.19, and the test freeze in week 16 means July 30th is our deadline; so we have a few more months, June and July, to get that in. We would like to get a full list of behaviors so that we can start having a denominator for comparison; I think we're still working on that, and there may be another PR down below. Bob, our numbers were actually down, and we don't show the up-and-down graph? Yeah, it's just that instead of being plus 11 it's plus 7, and I think it's because we removed some tests.
A
One of the things affecting our numbers is that we keep writing tests, or looking at tests or endpoints, that are not able to be tested, either due to framework issues or others, and we need to start documenting that so that we know. So we put together an APISnoop snippet, just a quick query of stuff that's related to volumes, and I just wanted us to, as a group, agree on these endpoints (the selection here is when there's a path like "volume").
A
So this exact list of twelve endpoints is not, at least for now, going to be part of conformance, and that means, as far as our denominator of things that we're measuring, you know, the ability to get to coverage on our endpoints, that we shouldn't be counting these twelve. So, two things. One: is this an accurate list of endpoints that are not going to be part of conformance for now, and is that true for all of the ones in this list? And secondly, for everything in this list, how do we keep track of that, other than a file in APISnoop?
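APISnoop itself does this selection as a query over its own data, but as a rough illustration (the endpoint names below are hypothetical, not the actual list of twelve), the "path contains volume" filter amounts to:

```go
package main

import (
	"fmt"
	"strings"
)

// volumeEndpoints picks out endpoints whose URL path touches volumes,
// mirroring the "path contains volume" selection described above.
func volumeEndpoints(endpoints map[string]string) []string {
	var selected []string
	for name, path := range endpoints {
		if strings.Contains(strings.ToLower(path), "volume") {
			selected = append(selected, name)
		}
	}
	return selected
}

func main() {
	// Hypothetical endpoint -> path pairs, for illustration only.
	endpoints := map[string]string{
		"readCoreV1NamespacedPodStatus":   "/api/v1/namespaces/{namespace}/pods/{name}/status",
		"createCoreV1PersistentVolume":    "/api/v1/persistentvolumes",
		"listCoreV1NamespacedPersistentVolumeClaim": "/api/v1/namespaces/{namespace}/persistentvolumeclaims",
	}
	fmt.Println(volumeEndpoints(endpoints))
}
```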
C
It'd be cool if we could say every Kubernetes cluster ever is supposed to have at least local persistent volume support, or mock persistent volume support, or some kind of driver that exercises the bare bones of persistent volumes and some of the expectations. But I think right now, and you're free to correct me if I'm wrong, it's kind of up to each individual cluster provider to attach and configure whatever class of persistent volume support they want, whatever works best for them.
D
Yeah, and actually there's a key point here too: it's certainly possible for us to develop plugins that we install to test conformance of things like CSI and all of that, or to test these mechanisms; there's no way to test CSI without installing some of these test providers. Does that cross the line that we had previously defined for, like, privileged access and what's acceptable for conformance?
D
And this is a great one, which is very difficult to test. I will say, though, there's nothing that prevents many of these from being tested without actually having a persistent volume provider; the APIs can be driven directly by an administrative user. So that might be another option: you can create a persistent volume and then...
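A minimal sketch of that option, driving the PersistentVolume API directly as an admin via client-go; the hostPath source and the names here are assumptions for illustration, not something the group settled on:

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTestPV drives the PersistentVolume API directly, with no storage
// driver involved; only the API-level behavior is exercised.
func createTestPV(ctx context.Context, cs kubernetes.Interface) (*v1.PersistentVolume, error) {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "conformance-test-pv"}, // placeholder name
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/conformance-pv"},
			},
		},
	}
	return cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
}
```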
C
That is true. I don't know if that's necessarily... it sounds like you're saying, sure, we could exercise an API and see 200 OKs come back, but I think the spirit of this is we want to describe the end-user-visible set of expected behaviors: when we attach a persistent volume, the pod should actually have a persistent volume, and it should be able to write to it and read from it, and things like that, which we can't exercise unless there's a harness for getting this persistent volume. So we can...
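For contrast, a sketch of the behavior-level check being asked for: a pod that mounts a claim and must actually write and read data. The image and claim name are placeholders, and this can only pass with a real volume behind the claim:

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runPVBehaviorPod launches a pod that writes a file into a mounted claim
// and reads it back; it can only succeed if a volume is really attached.
func runPVBehaviorPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-behavior-check"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{
						ClaimName: "conformance-test-pvc", // placeholder claim
					},
				},
			}},
			Containers: []v1.Container{{
				Name:         "writer-reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo hello > /data/f && cat /data/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err // the caller would then wait for the pod to reach Succeeded
}
```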
D
Through the extension mechanisms, but yeah, then you have a persistent volume with host path, although we don't really allow it; not all platforms are going to allow it. So yeah, this is a really important one that's completely uncovered. So what does it take to figure out whether this is a profile, or just something that's super important that needs to have an optional behavior of being bypassed if you don't declare it, but you can be conformant today without passing?
C
I don't know; I feel like... maybe this is a question I'd like to have John's input on to weigh fully, because I feel like there may be such a thing that is useful as a Kubernetes cluster that supports kind of stateless workloads, without the need for persistent volumes, and I want to be able to say, yes, that is a Kubernetes cluster. I think the alternative is, as you say... but, Aaron...
C
As far as how to best document or represent these things, I don't have a great idea off the top of my head. I sometimes feel like the closer to your code, the better; but I also think, maybe for things where you're not going to touch it ever, like ComponentStatus, we're not doing it because it's deprecated, okay, cool; but for things like these, where we're not going to do it now but we do think it needs to be covered under a profile or something, I wonder if you, like, linked up...
A
My suggestion is something that ties that together, like a PR... we don't have the metadata around the PR; we don't have tagging per operation, yeah. I tried to put up a PR a while back around adding some metadata, I think operation IDs, through on the logging, but it involved updating the way that we generate our swagger JSON to include some extra metadata, and that's the cleanest way that ties it all the way back to our code: when we generate the definition, saying these endpoints are conformant.
A
Now we've got stuff in core, and we're at about 50% coverage on core, but there's probably, at least with the ones on this list, going to be another 20 to 30% of core that we're just not going to test, that have this other path, and we should be super clear about that and tie it in. So my suggestion is a PR that updates the way that swagger JSON is generated, so that when you go back in history and look at why an endpoint was tagged this way: oh, it was a decision of changing the API definition.
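A hedged sketch of what consuming such metadata could look like; the "x-conformance-excluded" vendor extension name is invented here for illustration, since no such field exists in the real swagger.json:

```go
import (
	"encoding/json"
	"os"
)

// excludedOperations walks a swagger.json and collects operation IDs that
// carry a hypothetical "x-conformance-excluded" vendor extension.
func excludedOperations(swaggerPath string) ([]string, error) {
	raw, err := os.ReadFile(swaggerPath)
	if err != nil {
		return nil, err
	}
	var doc struct {
		Paths map[string]map[string]json.RawMessage `json:"paths"`
	}
	if err := json.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}
	var ops []string
	for _, methods := range doc.Paths {
		for _, rawOp := range methods {
			var op struct {
				OperationID string `json:"operationId"`
				Excluded    bool   `json:"x-conformance-excluded"` // invented field
			}
			// Skip non-operation entries such as "parameters" arrays.
			if json.Unmarshal(rawOp, &op) != nil || !op.Excluded {
				continue
			}
			ops = append(ops, op.OperationID)
		}
	}
	return ops, nil
}
```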
C
It sounds like what you're trying to do is kind of refine your definition of done for this particular piece of work, and so I feel like it's more appropriate for you to do the refinement on your end, rather than pushing it through the upstream projects that you are testing. So I feel like something that describes why you're filtering these endpoints out, or whatever, from your definition of done is maybe a more appropriate way of doing it.
A
To get that authoritative statement... you know, we were bringing it up in this meeting, but would it be best to send an email with these endpoints to, say, SIG Storage, just to make... whose conformance is it, right? It's kind of ours; we're curating the definition of conformance. And, in our definition of done, is this enough: for us to have this meeting and say we're going to remove these twelve endpoints from our target?
C
I feel like it's enough. I don't know if you want... I hate saying the word spreadsheet, but you know, you could have a spreadsheet with the list of all the endpoints, and you could highlight in green all the ones you've got covered, and we could add a new column and describe why you're not covering these, and then we sort of see how many remain.
A
Right, simplicity; and you saying that this is enough is fine with me. We'll just have a text file in here, with the commit that references this meeting. We'll have, like we have, the total number of endpoints and the total number tested, and then we'll have this other number that is conformance-eligible plus the ones we're going to ignore. So inside APISnoop we'll have a list of things that, during this meeting, we decided we're not going to use as a denominator.
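A small sketch of the bookkeeping being proposed, assuming a plain-text ignore list (one endpoint name per line) checked in alongside APISnoop:

```go
import (
	"bufio"
	"os"
	"strings"
)

// loadIgnored reads a plain-text file of endpoints (one per line, '#' for
// comments) that the meeting agreed not to count in the denominator.
func loadIgnored(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	ignored := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line != "" && !strings.HasPrefix(line, "#") {
			ignored[line] = true
		}
	}
	return ignored, sc.Err()
}

// coverage is tested endpoints over all stable endpoints minus the ignored set.
func coverage(all, tested []string, ignored map[string]bool) float64 {
	denom := 0
	for _, e := range all {
		if !ignored[e] {
			denom++
		}
	}
	if denom == 0 {
		return 0
	}
	return float64(len(tested)) / float64(denom)
}
```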
D
On some platforms, the node object might get recreated, and there are like seven bugs and seven KEPs on going and trying to make this make sense for those cases, which I don't know if we want to get into or not. And then the third one is: what does it mean, like, what's the behavior when it comes back? I don't know if that's something we can specify, but it is a pretty fundamental part of, like, the break-glass...
D
And the fundamental thing that this touches on is pod safety. Pod safety is one of those fundamental guarantees that nobody cares about until their data gets eaten. The basics of it is, you know, no two nodes in a cluster will ever have the same pod, with the same name, that thinks it's the same pod, running on them. So, like, the kubelet, only the kubelet, is the one who deletes the pod, not the control plane; so if they're partitioned, you've got to wait for the partition to heal.
D
So the kubelet says: yep, this process is done, now I delete it. That allows StatefulSets to work, and that allows things like read-write storage, like read-write-many, to work. So, like, iSCSI and Fibre Channel, those don't have any protections built into them; this provides the protections. If someone violates that rule, people running serious production transaction systems on Fibre Channel, because they're crazy, could actually have two pods thinking they're the same, and they had a guarantee that we didn't preserve.
D
That's another behavioral one that really does require disruptive action, so I think that's the same principle that Aaron highlighted before, which is: anything scary or surprising to a customer, or to a cluster user or cluster admin, should probably be something separate, to avoid, you know, "hey, you must destroy workloads in order to test this function", where you might generate an alert.
A
I remember a conversation we had a few meetings ago, maybe a couple of months ago, around: should we be able to run conformance tests and expect them to destroy things, particularly for submission to the CNCF conformance gate, the "give me my shiny badge" conformance? That definition of conformance, I feel, should include surviving massive deletion of things, in the way Clayton is describing, not necessarily carved off into a profile. It's changing the expectations of people wanting to use the conformance tool as an "is my cluster healthy" check.
A
We'll add these notes to the removal of delete core node and create core node for now. And, in addition to that, we're going to create, based on Clayton's feedback, a mock test that includes his feedback. I don't want to move that one out of the triage column until we have the behavior definition for the conformance written to tie it to; maybe that's a good one to tie to behaviors and get a clearer definition.
A
Why I'm phrasing it this way: we're going to say we're not going to do delete core node or create core node; we're going to remove it just like the rest of them, but we're going to document it, because Clayton spelled it all out. We're going to create a mock ticket, and it's going to sit there in triage until such time as we get a nice behavior system in place, and we have something more defined on: are we going to destroy people's nodes?
A
That is the list of 1 to 12; that's 16 points that we'll work on, that we're going to gain back. Thank you for your time, everybody. Wow, lots of notes and changes; this is excellent. So, for Riaan and Bruno, I think there's a way where you can get access to edit this doc; I think you have to join a Google Group, I'm not sure which off the top of my head.
A
This is the watch event verification tooling, and this is just the issue in Kubernetes, I think, right. This is an issue, I can't see the URL bar, but I think it's an issue, and it's related to our APISnoop file over here. And this is... I think this is pointing back to itself here, because of how our files are generated. We need to know that things happen in a particular order, and we want to collect events and make sure that they're done. We're going to propose that we... actually, Caleb?
F
That'd be great. So this is just a bit of sample code to exercise the behavior of retrying an event if it doesn't occur in the order which was expected. At the moment it's just a string, but this could be replaced with v1.Event or whatever; we're just expecting events to occur in the order of Added, Modified, Deleted, and yeah.
C
There was something, I pasted it in the conformance channel, that used the ListWatch to define list and watch functions, and then I think it used that to dump some events into an array, and then it checked the things it expected from those events.
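That pattern might look roughly like this with client-go's ListWatch; the pods resource and the timeout-based collection are assumptions of this sketch:

```go
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// collectWatchEvents builds a ListWatch, starts a watch, and dumps whatever
// events arrive within the timeout into a slice for later assertions.
func collectWatchEvents(cs kubernetes.Interface, ns string, timeout time.Duration) ([]watch.Event, error) {
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", ns, fields.Everything())
	w, err := lw.Watch(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	defer w.Stop()

	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	var events []watch.Event
	for {
		select {
		case ev, ok := <-w.ResultChan():
			if !ok {
				return events, nil
			}
			events = append(events, ev)
		case <-ctx.Done():
			return events, nil
		}
	}
}
```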
F
I've just had a short read. Is the behavior that you'd be wanting, to avoid flakes, that it checks for the right order, but there can be other things in between that are not a part of what we're expecting, and it just ignores them? So it says we're expecting Added, and then perhaps a Delete comes out, but then we also want a Modified after that, and so we expect Added then Modified, in that order. Yeah.
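A minimal sketch of that matching rule: the expected types must appear in order, and anything interleaved is ignored (a subsequence check):

```go
import "k8s.io/apimachinery/pkg/watch"

// expectEventOrder reports whether the expected event types occur in the
// received stream in the given order, ignoring any interleaved events.
func expectEventOrder(received, expected []watch.EventType) bool {
	next := 0
	for _, got := range received {
		if next < len(expected) && got == expected[next] {
			next++
		}
	}
	return next == len(expected)
}
```

With this rule, a received stream of Added, Deleted, Modified still satisfies an expectation of Added then Modified, matching the flake-avoidance behavior described above.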
A
Going to move along. Thank you so much, neolit, for that; very, very helpful. And the last thing we'll do: let's just go through our project board. We have a lot of stuff in the middle, but I'm going to focus on things on the ends, because we want to get stuff all the way through to done, and also backload the in-progress board, so we can say that these are okay tests to write.
A
Down the bottom, let's see how we're going. We're just needing an approve and, yep, that it's in the right place, and we can reach out to them; that would be great.
C
Actually, if you look closely, it doesn't have a little grey box next to it that says "required", so this PR will totally merge even if that fails. Out of my paranoia I'm going to retest it, because that job is supposed to be verifying that, you know, no non-GA APIs are being used in running conformance tests. Okay, and the test that's failing has the word "dip" in it; I forget if that's the test that this is trying to promote, but I'm just going to run it anyway.
A
All right, I'll revisit; we'll look at revisiting this in a day or two, just to make sure that that job went through and that we can put it in the approved column. So it looks like it is, and if it passes then we're good. Going back to the board, we have some things that are in the needs-review column; rather than reviewing stuff...
A
Okay, I can go through it next time. So the first thing at the top is our link to our files that create our process, and then we go ahead and create an issue out of that. So this is our approval issue, which is the current one we're looking at. We go through and create a selection statement for what to focus on, and the limiter here is pretty much focused on one endpoint, the deleteCollection for namespaced pod templates. So one thing I wanted to check before we go any further is this one.
C
We have talked in the past about not wanting to exercise the pod template resource, and yeah, I would put this lower; I'd put this at the bottom of your priority list right now, compared to everything else. The idea, I think, is something like: we regret that we exposed pod templates as a bare resource, and would have instead preferred that they were just part of the Deployments and ReplicationControllers and StatefulSets and ReplicaSets that use them as an embedded object. Okay, it might be nice, remember...
A
Thank you, Steve. We're basically going to create three pod templates, confirm that they're created, and delete them via a deleteCollection, and confirm that all of them got deleted. Here's the working test without using the Ginkgo framework, and we've written a function that turns this into a Ginkgo test, which is nice, so the step from here to a test is very short. We want to go through that closely now, so we know we agree, but basically it's a pipeline from here to conformance, if we look at this close enough.
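In outline, the described test might look like this plain Go sketch (not the actual PR; names and image are placeholders). Note that the final list assumes the delete has already completed, which is exactly the assumption questioned next:

```go
import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Create three pod templates, delete them via deleteCollection using a
// label selector, then confirm the list comes back empty.
func testPodTemplateDeleteCollection(ctx context.Context, cs kubernetes.Interface, ns string) error {
	sel := "podtemplate-set=true"
	for i := 0; i < 3; i++ {
		pt := &v1.PodTemplate{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("nginx-pod-template-%d", i),
				Labels: map[string]string{"podtemplate-set": "true"},
			},
			Template: v1.PodTemplateSpec{
				Spec: v1.PodSpec{Containers: []v1.Container{
					{Name: "nginx", Image: "nginx"},
				}},
			},
		}
		if _, err := cs.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	if err := cs.CoreV1().PodTemplates(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		return err
	}
	list, err := cs.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	if len(list.Items) != 0 {
		return fmt.Errorf("expected 0 pod templates after deleteCollection, got %d", len(list.Items))
	}
	return nil
}
```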
C
It looks to me like this is written assuming that everything is nice and synchronous and blocking, so that the deleteCollection call won't return until the collection has truly been deleted, and all of the pod templates within there have been deleted, and so when you then list, you'll totally see what you expect. It may be that that takes some time, and you'll need to do what we have done in other tests, for which we have tried to write that watch event framework to help.
C
Which is: you watch the thing, and then wait to see a delete event on the thing, and then you verify that your expectations hold true, now that the delete event tells you the delete has happened. So the content looks great, but I feel like this may not just be a copy-paste. Does that make sense? Yep; I already had discussions with Caleb about some of the watch stuff that he's been looking at, so I was just waiting for the tooling to be a little bit further along, to make it easy for me to pick up that card.
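A sketch of that wait, using client-go's watch tools rather than assuming deleteCollection is synchronous (the selector is carried over from the sketch above):

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForPodTemplateDeleted blocks until a Deleted event is observed for
// the selected pod templates (or the context times out), so list-based
// assertions afterwards aren't racing the API server.
func waitForPodTemplateDeleted(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	w, err := cs.CoreV1().PodTemplates(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	_, err = watchtools.UntilWithoutRetry(ctx, w, func(ev watch.Event) (bool, error) {
		return ev.Type == watch.Deleted, nil
	})
	return err
}
```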
C
...being kind of brute force about this, copy-pasting stuff and following a common pattern. But what I do feel like others might ask is: this is basically the same expected API behavior regardless of what the underlying resource is, so is there any way we could write this test so that it is resource-agnostic?
C
It's just something to consider; for all I know, there's somebody at Google who's trying to work on that thing, like auto-generating a bunch of test cases based on generic API behavior resources, I don't know. But if you're just changing the word "pod" to "pod template", and then to the word "event", I think was the other one, it seems like it's going to be interesting to figure out if there was a way to parameterize this on the resource type.
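One hedged way to parameterize on the resource type is the dynamic client, where the resource under test is just an argument; the stubs and selector here are placeholders:

```go
import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// Resource-agnostic create / deleteCollection / list: the resource under
// test is a parameter, so pod templates, events, etc. share one test body.
func deleteCollectionBehavior(ctx context.Context, dc dynamic.Interface,
	gvr schema.GroupVersionResource, ns, selector string,
	stubs []*unstructured.Unstructured) error {

	client := dc.Resource(gvr).Namespace(ns)
	for _, stub := range stubs {
		if _, err := client.Create(ctx, stub, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	if err := client.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: selector}); err != nil {
		return err
	}
	list, err := client.List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	if len(list.Items) != 0 {
		return fmt.Errorf("%s: expected empty list, got %d items", gvr.Resource, len(list.Items))
	}
	return nil
}
```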
A
Okay, comment added. And then we have another one, from Riaan. Riaan, wow, six points, this will be fun. This is nodes, which has a delete node, which, again, earlier we were talking about what we can and can't do, so this will need a bit of modifying. And this is Riaan's first PR, our first creation, so he's looking at...
A
So node status is okay, but I'm not sure about deleting nodes, because, I think, the selection that you're looking for here is just "show me everything that doesn't hit volume that's part of node", and it's too wide a scope; node status is already taken care of, and we're not going to delete nodes for now. So I think this just needs to be retargeted, or closed.
A
Is this accurate? Okay, cool; and that stops us writing any tests way ahead of anything, which we'd agreed wasn't going to happen. So Bobby's got another one, looking at things that are DaemonSets that don't hit volumes, and so we have replace, patch, read, list, delete. This is a +10, it'll be a good one, plus five...
A
Good to know where we're headed; and then, because you won't know what you actually hit until you write the test, because those are still untested, right; this is from untested stable core endpoints, and then you were able to choose out a set. In general... do you want to walk through this?
F
Yeah, thanks. You know, actually there's nothing too special about this one; it's pretty normal. We're changing the container image a bunch, because that's a pretty easy thing to do, and setting out the labels and that label selector. This test is pretty modular; it's pretty easy to just change what resources it touches, though the specific bits of data at the top just create it, and then it goes from there.
C
My only comment here is I'm not sure how I feel about verifying that the number of pods scheduled, sorry, the number of pods ready, is equal to the desired number scheduled. That says the DaemonSet is doing what it's telling you it's doing. What I'm less sure about here is whether that describes what an end user expects a DaemonSet to do; as an end user, I expect a DaemonSet to deploy a single pod on all of the nodes to which that pod can be scheduled, and so now...
C
...is there a way here where we're independently verifying that? I don't know if we have to go as granular as which nodes this should not be scheduled on and which nodes it should be scheduled on, but I feel like maybe... do we want to compute the number of pods that we expect to see, or the number of nodes we would expect this to be scheduled to, to verify it independently?
C
Yeah, yeah, maybe. I don't know off the top of my head if there is prior art for this in the codebase. The way I would probably look for this is, I would look at any of the scheduling tests and see how they do it, but I think they all expect to be run in serial; so maybe there are some other tests out there that look at it, because we're not necessarily trying to exercise the scheduling mechanisms here, like taints or node selectors, things like that.
C
But I do feel like you don't necessarily have control over how many nodes are schedulable on a cluster that you're running this test against. It could be that you're running against a cluster whose control plane is deployed on some nodes that are explicitly marked with the NoSchedule taint, or some other taint that things don't tolerate.
C
Typically, so, you kind of need to... I feel like there's got to be some way to describe or express that. I may be sending you on a wild goose chase, and you're free to tell me that I am, but I just feel like this isn't really describing that; this is describing the DaemonSet saying it's going to create as many pods as it says it's going to create. That doesn't necessarily guarantee that those pods are created one per node, and that the nodes it scheduled to are the nodes that it should be scheduled to. Does that make any sense?
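A rough sketch of computing that expectation independently: count the nodes the pod could actually land on, skipping unschedulable nodes and untolerated NoSchedule/NoExecute taints. Real scheduling has more inputs than this, so it's an approximation:

```go
import v1 "k8s.io/api/core/v1"

// expectedDaemonSetNodeCount returns how many nodes a DaemonSet pod with
// the given tolerations should be scheduled onto; compare this against
// status.desiredNumberScheduled / numberReady rather than trusting the
// DaemonSet's own accounting alone.
func expectedDaemonSetNodeCount(nodes []v1.Node, tolerations []v1.Toleration) int {
	count := 0
	for _, node := range nodes {
		if node.Spec.Unschedulable {
			continue
		}
		schedulable := true
		for i := range node.Spec.Taints {
			taint := &node.Spec.Taints[i]
			if taint.Effect != v1.TaintEffectNoSchedule && taint.Effect != v1.TaintEffectNoExecute {
				continue
			}
			tolerated := false
			for _, tol := range tolerations {
				if tol.ToleratesTaint(taint) {
					tolerated = true
					break
				}
			}
			if !tolerated {
				schedulable = false
				break
			}
		}
		if schedulable {
			count++
		}
	}
	return count
}
```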
C
To meet in the middle: I'm saying the concept of testing DaemonSets sounds great; how that mock test is written is not how I would recommend it, even if it exercised things. So it's up to you whether you want to say, you know what, we're not going to proceed with this until we've kind of honed the mock test, or if you want to start iterating on it, you know, after the fact. That's up to you.