From YouTube: 20200324 Sig Arch Conformance
B
Thanks, Caleb. Hi, I am Hippie Hacker, and welcome to the SIG Arch Conformance office hours. These meetings are governed by the CNCF community code of conduct, which basically says be kind and do not be unkind, and this meeting is being recorded for everybody for later. I think it's a pretty simple agenda today; I didn't see anybody add anything except going through the board, so I wanted to make it super easy for us to do so.
B
The agenda we've got so far is an emotional check-in, and then we've got some promotions to talk about, some tests that are in progress, some sorted backlog items that don't have PRs yet, and 15 points worth of triage. We've also got some rewrites in progress and a little update on the website itself. So, first part of the agenda.
B
What this meeting is about, what we're measured on, and what we're planning to do is finish the migration to AWS for the main site, so that all the different members of our community, from the corporate sponsors on, can contribute a little bit to making sure that this community runs well. As far as the team and our OKRs that we went over a month ago, we're on task to do that.
B
But again, with the code freezes and other things going on in the world, it'll be a bit slower, though I think we're still going to get there. This is our promotions section, so from this area I'm going to pull up the column link for that and go ahead and pull up these three tickets as we go through them. I have to move my video window out of the way for a moment.
B
Click on here. So this column, I've kind of focused it on Caleb and the needs-review column, making sure that anything in this column that needs review for promotions is ready and not blocked. So I'm going to go down to the bottom and see where we are. We need to update the release, from thirty seconds ago. Thank you, thank you, Aaron. So we have an action on that. Any other comments on this particular ticket?
B
We did a retest of this conformance data. I'm going to note that I think this one's going to merge soon, and our note will be that it looks like it will promote soon. I wish there was a way to log in as a group to co-author that comment, if everybody agrees. The next one is two points; we'll go down and see. Thank you. And once again, it looks like this will merge; let's make a quick change.
B
We removed the do-not-merge, and I think we're past the do-not-merge hold now, so that's removed, yeah. It looks like this will merge very soon. Hopefully, together with all of these points, we'll get that full amount of coverage for our promotions. So again we'll go to the bottom of the promotion and check. Thank you again for unblocking us there, Aaron, and again these will merge. Awesome. So just to summarize, we had eight points possible for promotion, and I'm...
G
Others think, I mean, the original rule was just because people were using events to write bad tests, right? Like they were using events in a way that was not what events were designed for. But checking that an event was fired, and requiring that event to be fired, I mean, there are some limitations there, but if events aren't propagating through the system, that would also be what I would consider a conformance failure. It may be that not all of the failure modes are obvious to people when they test for events.
F
I believe the rule is there in that you can look for, well, I would have to go back. There were some tests that were using actual events, but we said you couldn't look for specific messages in the events, for example. I may be mixing things up; I'd have to go back, yeah.
G
There's just a lot of confusion, because people originally were looking at events and building logic off of how events were fired, which wasn't the intent of events. It was like looking at log file lines instead of using the underlying primitive to verify what the system observed. But it would be totally okay to have a conformance test that expected a certain log line to show up, if we could make an argument for it.
G
So, in this case, I would probably say there are also problems with using watches if you don't properly handle the error cases. We have had bugs because people did not understand that failures in these were actually system-level bugs, and so some of the conformance tests that were switched away from using watches were switched to something that no longer actually caught that a fundamental part of the system had regressed. And so this kind of gets into:
G
If you're using a watch, you have to be aware of the scenarios under which watches can fail in a valid sense, and then retry in the test, and I would prefer that in many cases. Certainly there will be people who'll come through and say, oh, you're not supposed to use watches for this. That's actually incorrect: you should use watches, but we need to make sure that they're used in a way that matches the semantics, and that often requires some external knowledge.
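As a rough illustration of the retry-on-benign-closure pattern being described (this is not the actual conformance test code), a minimal Go sketch assuming a recent client-go with context-aware typed clients; the pod resource, label selector, and the done callback are placeholders:

package sketches

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// watchPodsUntil re-establishes the watch when it closes cleanly, instead of
// treating a benign closure as a test failure.
func watchPodsUntil(ctx context.Context, c kubernetes.Interface, ns, selector string,
    done func(watch.Event) bool) error {

    // List first so the watch can start from a known resourceVersion.
    list, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        return err
    }
    rv := list.ResourceVersion

    for {
        w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
            LabelSelector:   selector,
            ResourceVersion: rv, // resume from the last point we observed
        })
        if err != nil {
            return err
        }
        for ev := range w.ResultChan() {
            if obj, ok := ev.Object.(metav1.Object); ok {
                rv = obj.GetResourceVersion()
            }
            if done(ev) {
                w.Stop()
                return nil
            }
        }
        // The result channel closed without the condition being met: a valid
        // way for the server to end a watch, so re-establish rather than fail.
        if ctx.Err() != nil {
            return ctx.Err()
        }
    }
}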
G
A watch, yeah. So a great example of this is the pod graceful termination test. It was originally written in a way that correctly verified that the sequence of events coming from the API server matched a very specific behavior. At some point people were like, oh, this is flaky, and they refactored it, and then we actually broke that fundamental behavior and the tests no longer caught it. So usually what the informer is shielding you from is dealing with the consequences of a watch.
G
What we're dealing with right now is because we don't have good e2e testing that verifies that invariants of the system aren't violated, and watches tend to show you that. It's not that much different from doing a poll. So in many cases, as you're saying, Aaron, with an informer or just doing a poll, you'd just be checking to see that you don't see any invalid states, but that's also a harder test to write, and I think we've struggled with this in a lot of the conformance testing around the system.
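A minimal sketch of that poll-for-invariants idea, assuming client-go and apimachinery's wait helpers; the invalid-state predicate and the 60-second window are illustrative choices, not values from the meeting:

package sketches

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// pollForInvalidStates repeatedly lists pods and fails fast if an invariant is
// violated, rather than waiting on a single watch event.
func pollForInvalidStates(c kubernetes.Interface, ns string, invalid func(v1.Pod) bool) error {
    err := wait.PollImmediate(2*time.Second, 60*time.Second, func() (bool, error) {
        pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return false, nil // tolerate transient API errors and keep polling
        }
        for _, p := range pods.Items {
            if invalid(p) {
                return false, fmt.Errorf("pod %s reached an invalid state", p.Name)
            }
        }
        return false, nil // never "done": keep checking for the whole window
    })
    if err == wait.ErrWaitTimeout {
        return nil // the window elapsed with no violation, which counts as success here
    }
    return err
}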
F
Right, that's basically what this does right now; I'm just looking over it. It's simple things: we patch it, and then we make sure that, you know, replicas and readyReplicas get to the right state, and then we delete it and make sure that we get a delete from the watch showing that the replication controller was deleted.
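Roughly, the flow being described might look like the following sketch (illustrative only, not the real test code; the replica count of 2 and the timeouts are made-up values), assuming a recent client-go:

package sketches

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// patchScaleAndDelete patches a ReplicationController, waits for its status to
// catch up, then deletes it and confirms the Deleted event arrives on a watch.
func patchScaleAndDelete(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    rcClient := c.CoreV1().ReplicationControllers(ns)

    // Patch the replica count (strategic merge patch on spec.replicas).
    patch := []byte(`{"spec":{"replicas":2}}`)
    if _, err := rcClient.Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }

    // Poll until status.replicas and status.readyReplicas reflect the patch.
    if err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
        rc, err := rcClient.Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, nil
        }
        return rc.Status.Replicas == 2 && rc.Status.ReadyReplicas == 2, nil
    }); err != nil {
        return err
    }

    // Open a watch scoped to this RC, delete it, and expect a Deleted event.
    w, err := rcClient.Watch(ctx, metav1.ListOptions{FieldSelector: "metadata.name=" + name})
    if err != nil {
        return err
    }
    defer w.Stop()
    if err := rcClient.Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
        return err
    }
    for ev := range w.ResultChan() {
        if ev.Type == watch.Deleted {
            return nil
        }
    }
    return fmt.Errorf("watch closed before a Deleted event was observed")
}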
G
The kind of test that I'm advocating for, I actually don't think it has to be a conformance test. The conformance test should verify the high-level observed behavior, to maximize return on conformance. I think the second one is verifying that the system is doing the right thing under pressure. I mean, I guess the question is...
G
And we're not, right? I mean, it's funny, because a lot of what I've been spending the last six months on is places where our core fundamental model is broken. So, for instance, for over a year we had a bug where a pod that, if it ran, would always fail was reported as successful, and so a job controller that's written on top of retrying failed pods, but not successful pods, would break, right? There were edge cases where, if you were like, hey, I'll use a job and I say retry on failure...
G
Oh, the job succeeded, great, my stuff got run, and that actually wasn't true. I think, given the difficulty in testing that stuff, that's just, I think, a separate discussion. We should have those tests before we worry about putting them in conformance. Yeah, in SIG Node we had a discussion today about that; we actually need to do that for several other things, because we've observed more status anomalies in the kubelet, and we have regressions and subtle bugs, I bet.
B
Let's go to the next one. This is the create Event lifecycle test. This is the PR that is just looking for LGTMs, I think, and approvals. So far there were some resolved issues from Aaron and conversation from somebody on the SIG themselves; it's basically been Aaron and Bobby. Sorry, Caleb.
D
In the context of our last discussion, this test is generating a synthetic event and then making sure that the lifecycle of that synthetic event is okay, which I interpret to be okay, because we're not relying on arbitrary Kubernetes components to send certain events at a certain time. It's pretty much just straight-up CRUD on an Event now.
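A hedged sketch of what straight-up CRUD on a synthetic Event can look like with client-go (the event name, reason, and message are placeholders, and this is not the PR's actual code):

package sketches

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// eventCRUD creates a synthetic Event, patches it, reads it back, and deletes
// it, without depending on any component having emitted the event.
func eventCRUD(ctx context.Context, c kubernetes.Interface, ns string) error {
    events := c.CoreV1().Events(ns)

    ev := &v1.Event{
        ObjectMeta: metav1.ObjectMeta{Name: "synthetic-test-event"},
        InvolvedObject: v1.ObjectReference{
            Kind:      "Pod",
            Namespace: ns,
            Name:      "does-not-need-to-exist",
        },
        Reason:  "ConformanceSketch",
        Message: "initial message",
        Type:    v1.EventTypeNormal,
    }

    created, err := events.Create(ctx, ev, metav1.CreateOptions{})
    if err != nil {
        return err
    }

    // Patch the message, then read it back; a real test would assert the change stuck.
    patch := []byte(`{"message":"patched message"}`)
    if _, err := events.Patch(ctx, created.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }
    if _, err := events.Get(ctx, created.Name, metav1.GetOptions{}); err != nil {
        return err
    }

    return events.Delete(ctx, created.Name, metav1.DeleteOptions{})
}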
G
I think we should absolutely test events, the API. We probably should test the reporter too: components in kube use the client-go recorder library, and if they can't send a valid event to the recording infrastructure, that is not a conformant event implementation, in the sense that we define what the APIs are but, like, in practice.
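For reference, the client-go recorder wiring that in-tree components typically use looks roughly like this (a sketch, not taken from the PR; the component name is a placeholder):

package sketches

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
    "k8s.io/client-go/tools/record"
)

// newRecorder wires up the client-go event recorder the way in-tree components
// do: a broadcaster that writes recorded events to the events API.
func newRecorder(c kubernetes.Interface, component string) record.EventRecorder {
    broadcaster := record.NewBroadcaster()
    broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
        Interface: c.CoreV1().Events(""), // "" lets the sink use each event's namespace
    })
    return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: component})
}

// Usage: recorder.Event(obj, v1.EventTypeNormal, "SomeReason", "a human-readable message")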
F
Yeah, basically it was fine; it was just some terminology in there, I think, that was confusing, in that it said a service when actually these are strictly Endpoints. There's no associated service; I don't believe the test creates an associated service, and so this is really just testing the CRUD. So this is another comment that's not on there: it's not actually testing right now that these manually created endpoints, as opposed to ones created by the endpoints controller, are plumbed through to kube-proxy.
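A minimal sketch of creating such a manual Endpoints object with no backing Service or selector (the object name, IP, and port are placeholders, not values from the test):

package sketches

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createManualEndpoints creates an Endpoints object directly, with no Service
// and no selector, so nothing is generated by the endpoints controller.
func createManualEndpoints(ctx context.Context, c kubernetes.Interface, ns string) (*v1.Endpoints, error) {
    ep := &v1.Endpoints{
        ObjectMeta: metav1.ObjectMeta{Name: "manual-endpoints"},
        Subsets: []v1.EndpointSubset{{
            Addresses: []v1.EndpointAddress{{IP: "10.0.0.10"}},
            Ports:     []v1.EndpointPort{{Name: "http", Port: 80, Protocol: v1.ProtocolTCP}},
        }},
    }
    return c.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{})
}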
G
You might have temporary latency delays, but I think all of those could be resolved by the test itself explicitly retrying and then cataloging some of that. It's weird to me, because it's kind of what we're saying when we talk about watches not being reliable: we're saying we implicitly don't know how to bound the reliability of what we're doing. A clean shutdown of the watch should be interpreted as meaning that we can go back and start the test over from the beginning, and if we never complete, that's a serious problem.
G
If you can't hold a watch open for more than 60 seconds on a particular cluster... we've never actually tried to state that, so I'm not saying that we should go do that right now in this conformance test. But the general idea is that if you make a call to the Kubernetes API to open a watch at a resource version within a certain fixed linear time and you get weirdness, that's a bug in kube or in the implementation of kube. Again, I don't think we have to do it here. One way that we might address it:
G
I'm going to fail this conformance test because we have another conformance test that also says watches should be reliable for 60 seconds, no more than, you know, ten percent of the time. But until we have that, it's probably premature to me to do it here. Like, watches aren't supposed to be unreliable, so maybe this is just us needing to claw our way back into taking control of the problem.
G
The test infrastructure sometimes doesn't like this theory, and maybe we need to come back in and say SIG API Machinery needs to have a conformance test that says a watch should be open for a hundred seconds, or should be able to be open given this set of criteria, and thus that is a definition of what API Machinery considers a conformant implementation of both the API server and the underlying data store, to provide some consistency of experience for API clients across all of kube. Great, I'm...
G
Watch is kind of weird, because if you can't actually watch things with watch, why do we have that API? If your watch is closed after 30 seconds all the time, that might be okay, but then you should be able to resume it. If somebody had an implementation of kube that compacted every 15 seconds and lost stuff, I actually think that to an end consumer that would appear to be a non-conforming cluster.
G
We haven't defined it as such, so, you're right, Aaron, I don't want to over-bound it. It's just that, as we're talking about this, I'm like: if I tried to use a cluster that couldn't keep a watch open for 180 seconds, I would file a bug against them, and I would probably come to this group and say these guys aren't conformant, they need to get in line. If watch doesn't really work, I think you would be non-conformant, in my opinion, but I think that's something that we could handle as a process.
G
We'd want API Machinery to give us that, and this actually raises a bunch of other issues, because k3s went and did the SQLite backend, and cloud providers that are playing around with non-etcd backends are emulating the API. So I would want this to be... I think it's in the best interest of all kube users and conformance if people know what the minimum bar is.
G
If we were to go set the minimum bar today and people didn't meet it, we would just need a process for ensuring that they understood how to change it. And I'm not implying that k3s would ever do this, but if k3s was like, well, we just can't make watch work, and we've got one too, I think that's just a separate discussion we could get to. I'm only picking on them because I know they did the SQLite thing and I think they're crazy. Well...
F
I mean, but I think the hard part of that has to be, as part of watch conformance, making sure that we're not making accidental behaviors inherited from etcd part of the conformance. Like, we should be careful about setting that bar you mentioned as low as possible, at what we really need out of watch, as...
G
Opposed to, and it's funny, because SIG API Machinery has explicitly said that they will not take on scope for non-etcd backends. And so, you know, by doing that, it's like, I want people to be able to write conformant things. I don't want someone to not be able to rely on the client libraries against a cluster that looks almost conformant and then have to deal with the fallout of it.
G
If I had a properly behaved informer and it was re-listing every 15 seconds, I'm going to say this is not standard, this is not how Kubernetes is supposed to work, this is not how Kubernetes was intended to work, and this is not something the SIG would support as a bug. We'd just send that back and be like, that's a k3s... I'll stop picking on k3s... that's that provider's problem. But then that kind of defeats the point of conformance. So, John, I'll...
G
A very unhappy set of users, and I do think... I think that it feels kind of weird to say things like this, but I would probably say: yeah, look, you have to complete these tests in a reasonable amount of time, and five minutes is more than 180 seconds; it's so much more time than you need, but if you fail that, I would question your life choices. Well...
G
We had the rules originally because people were definitely abusing events and watch, and watch has gotten better. I think in some cases we were reacting to the wrong problem. We were dealing with flakiness using, you know, the fire hose because we had to, but now we need to go in with the water jet and, you know, be very gentle around the gums as we excise the remaining fat here.
C
Just to address why 180 seconds was chosen: I believe this test is fine at or below 60, and yeah, I just changed it to 180 seconds and it seemed to do the job. So I totally get it; if 180 seconds isn't appropriate according to other values which have been set for other tests, then I'll be happy to change it to whatever number. Just let me know what it is.
F
I don't know if it's appropriate; it sounds long. In theory it's a long time for a normally running cluster, but sometimes our CI clusters get patently overloaded, and I think when you create that RC, what it's doing is pulling some images, but those should be things that are already loaded. Like, I don't know, I just wanted to make sure we had looked at other tests to see what they're using.
G
And I think that's actually a really key thing, which is: our CI clusters are being overloaded, so we've toyed with this for years, right, like optimizing the CI environments, but we've never actually defined what the minimum requirements to run are, and there are conformance tests that did not run on smaller clusters. For example, we fixed a scheduler conformance test to be much more resource-efficient because it was asking for something like 100 millicores and it didn't need to, to test conformance.
G
Maybe this is another one where we need to describe what the expectations for conformance are and set up the test environments to more realistically mirror that, and then say, you know, we'll try to run close to the floor so that we push the flakes out, but conformant implementations need more than this to do so. It gets into other questions, like: you have to have a certain amount of free resources to run the conformance test for scheduling. Does that block someone from running conformance on a k3s single-node cluster? Probably.
G
They have an implicit 30-second timeout, or sorry, an explicit 10-minute timeout if you don't set one, which is why we tune them down, because otherwise you would fail after waiting ten minutes. I can vaguely remember Aaron at some point going in and yelling at people for not having timeouts on their watches, but it's been so long that I couldn't tell you, sorry.
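For context, the server-side watch timeout can be set explicitly through ListOptions; a small sketch, assuming a recent client-go, with the resource and function name chosen only for illustration:

package sketches

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// watchWithTimeout asks the server to end the watch after timeoutSeconds
// instead of relying on whatever default the server or test framework picks.
func watchWithTimeout(ctx context.Context, c kubernetes.Interface, ns string, timeoutSeconds int64) (watch.Interface, error) {
    return c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
        TimeoutSeconds: &timeoutSeconds, // server closes the watch cleanly after this
    })
}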
G
This was, I think, the watch. So in the cases where this has happened, it's usually been that there's some edge case. I don't remember the exact details on all of these, but, like, the replication controller was flaky at one point and loses creates because of expectation failures, and so, as a result, the thing that we're trying to go in and frame in conformance now, that replication controllers create pods, at one point would just run all the way to failure.
C
And I am trying to catch up on typing to address the first issue, which you stated in your comment. I believe that service stuff was still in there because the way that I was writing this test was pretty similar to another test, which was some kind of, I think it might have been, a service status test, yeah.
F
We didn't want... we wanted this to be, you know, narrowed in scope to just Endpoints, not to test the service and not to test the endpoints controller, because we already have tests for that. We already have conformance tests for services with selectors, and so we were just doing the Endpoints resource.
F
Specifically, we wanted to just use manual endpoints as opposed to ones created by the endpoints controller, so I'll clean up your comment, yeah. And then the other thing that I brought up was whether we want to run through the networking side of this, since the existing tests around services implicitly test kube-proxy, or rather they actually look for the behavior of kube-proxy or whatever it is plumbing the network. But the question is: that's always for ones with a selector, sort of.
F
Let's leave that out of this PR; we can talk about it later. I just think it's already essentially tested, but, you know, I don't know why somebody would do it. Somebody could create an implementation that only read, I don't know, services with selectors or something, but that seems loony. I...
G
I mean, I certainly catch people doing that sometimes, so it's not completely unprecedented. People make assumptions like, oh, the only kinds of workload controllers in kube are the ones that kube defines out of the box. So I don't know that there's a big problem, though. I would say if we want to do that, we should probably move that closer to the SIG, as you were saying, John.
B
The next part is our sorted backlog that doesn't have PRs yet. For some of these we have our APISnoop ticket, which includes the way that we work to create this, but I'm not going to go through that; I'm just going to open up these two pieces here. So this one is the research area for a ticket, and this is in the backlog. Oh, I think we're okay; I think we already voted last time that this was okay to do. We go all the way down.
B
The backlog, and so that's what we're going to do; we're just still going to leave it there. Let's close all that and go back to our pages here. The next part is triage. This is where we get new tests to write, so let's quickly pull these out so that we can decide if there's something wrong before we write the test. That was the other one. So this one is the pod v1 status stuff. These are the four endpoints that it's going to hit, and the documentation for it, before we go into the test.
B
All right, I think it's one, and it may accidentally hit the other ones. Again, increasing coverage shouldn't be the focus; the fact that we chose this behavior is the reason we wanted this test, versus writing one just to hit endpoints. So if we write this test and it hits all four, are we writing it to hit all four? No, we're testing some behaviors. So think of it as: we're choosing the behavior that hits the most endpoints, and it's the deepest, yeah.
F
Don't, you know, millions of tests create pods, right? But I believe there are actually some explicit ones in now that explicitly create pods, not for any workload controller, so couldn't we just add a check on the status of those same pods, so we don't write a whole separate test? I don't know that it matters much, but...
C
I think, though, this test, and a variety of other ones which I've written, which are quote-unquote lifecycle tests, or other resource lifecycle tests, I think they're valuable to have, testing all of those things in one, because it's just testing all of the general behaviors, all of the CRUD. So I think what you're saying about creating a pod might be useful, if I'm understanding what you said correctly.
B
Right, yeah. The next item is: write the PR. Cool, thank you for helping me focus on that. That's what we want on each one of these mock tests (we call them mock tests), yeah, before we do the hard job of writing the tests, so that we have most of the logic in place. This one is very similar; we're hitting a patch and a list. It's not that similar, but it's hitting three endpoints. The logic is pretty straightforward: create a service account secret, ensure it's patched.
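A rough sketch of that create-then-patch-then-list flow against the Secrets API (assuming a recent client-go; the secret name, label, and use of the default ServiceAccount are illustrative guesses, not the PR's actual code):

package sketches

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// createAndPatchServiceAccountTokenSecret creates a service-account-token
// Secret bound to the namespace's default ServiceAccount, patches a label onto
// it, and lists secrets to confirm the patched object shows up.
func createAndPatchServiceAccountTokenSecret(ctx context.Context, c kubernetes.Interface, ns string) error {
    secrets := c.CoreV1().Secrets(ns)

    secret := &v1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "sketch-sa-token",
            Annotations: map[string]string{v1.ServiceAccountNameKey: "default"},
        },
        Type: v1.SecretTypeServiceAccountToken,
    }
    created, err := secrets.Create(ctx, secret, metav1.CreateOptions{})
    if err != nil {
        return err
    }

    // Patch a label onto the secret.
    patch := []byte(`{"metadata":{"labels":{"patched":"true"}}}`)
    if _, err := secrets.Patch(ctx, created.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }

    // List by the patched label; a real test would assert the secret appears.
    _, err = secrets.List(ctx, metav1.ListOptions{LabelSelector: "patched=true"})
    return err
}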
F
Just a note that, right, you don't really need to check the status of the secret as part of the test; there are already separate tests for secrets. I mean, you obviously need the secret in order for the whole thing to work, but just as a sort of side...
F
...note, if you're following what I'm saying: like in other places, we watch that the create of the resource in question happens, and the patch happens, and the delete happens, etc. We don't really need to follow all of that for the secret, other than implicitly knowing that it happens; do it explicitly for the service account. That makes sense.
B
I'm going to comment. Thank you. And we've got one minute and three PRs to go, sir, so I think what we'll do with those is I'll drop them in the conformance channel, and if we can maybe do that asynchronously, that would be great. The last thing on our meeting was to go through and say there's a rewrite in progress that I think maybe we should drop, because it was based on increasing endpoint coverage, sorry, coverage based on parameters. And then we're going to be migrating to AWS; that's in progress, and the website's down.