From YouTube: 20200421 SIG Arch Conformance
C: I mean, I saw people reviewed it, and I appreciate that. I iterated on it again on Friday and added a few more things, starting to get into the solution space. I think people reviewed it after I did that, so I didn't.
C: I would love that, but I don't... a super cool thing would be if you could discover from a cluster what profiles it supports. Then you start talking about tooling, where your Helm chart declares the profile it needs, and you can actually have tooling that says: hey, I can't deploy into this cluster, it's missing this profile. Which would be really cool.
A: And I'm going to leave that and not make another action item on that one for now. Iterating forward: a quick review of our statistics. Since the last release we do have our five new endpoints, and our 11 here that are completed and done. I think we have some stuff that was in the midst of merging that we didn't get in before 5:00 p.m. yesterday, so we didn't quite get everything in there.
A: For that, our current cycle, at least the way that I've set things up for our OKRs, is quarterly starting in January, and I kind of wanted to move that into our release cycle instead, because it matches the deadline to get the tests merged, and that's very likely going to look different this year, for example for 1.19.
C: There was discussion on SIG Release about this this morning. Yesterday Stephen sent out an email talking about a three-week push, but we'd asked for six, and it seems like people are in agreement with that. So the likely release date for 1.19 is mid-August, and then basically, instead of doing three releases for the remainder of the year, we'll do two four-month releases and then evaluate whether that worked.
E: One of the concerns I had, and I didn't get a chance to show up to the SIG Release meeting this morning because I have a conflict, is that it doesn't seem like the code freeze date is moving substantially, which matters for the amount of time you're going to be able to spend writing conformance tests and getting them included.
C: So if I can jump in, the idea there isn't to slow down your work but to let work continue; I'll put it this way: the mitigation for that is to use feature branches, and that's something I have mixed feelings about myself.
E: I'd appreciate that decision not being made without consulting anybody who is working on the supporting infrastructure and is aware of the state of feature branch support and all that. So I think what we would want from a conformance perspective is some cutoff; we used to use code freeze as the deadline, so maybe there's some other deadline we should be aware of.
C: Also, part of that long code freeze is for stability and for focusing on things like tests. I mean, tests don't add risk to the release, they actually reduce risk, and part of the goal of the longer code freeze period, not a longer cycle, is to reduce risk. So I think there could be some discussion we can have with the release team around test additions in that period.
E: We need to see CI and understand the behavior of the jobs and what is expected, and we don't want to chase after a moving target. So it still seems to me there has to be a deadline by which we have the tests in. I'm just concerned that, since the code freeze date hasn't moved at all, we should ask if there's a deadline for us to use.
A: Time's about up on the primary agenda; are there any other comments? I'm actually going to put the board last, because we go through the board with as much time as we have. But one thing I wanted to note is that we haven't updated our P0s and such up here in maybe even over a year now. We're not focusing on pod spec right now; we actually are doing watches, but we're doing it by writing test helpers.
C: You had started to do conversion of existing tests, and then we would see that coverage; I don't know how far along that was, or what we can say. I'd love to have a number and be able to say pod spec behaviors are covered 80% by the existing tests, and I don't know that number right now, and there are probably additional behaviors too, right? But in that sense, if we look at Kubernetes, what areas are most critical to ensure we have conformance coverage?
A: And that may be related, because we're looking at endpoints right now, and we're just working out of core as far as some of our initial endpoints for increasing the coverage, and then we'll start looking at the variations of the parameters. And maybe we should jump in a little bit while we're waiting on the coverage numbers. So it may be fine.
A: Anything else before we go to the board? Because that tends to run the rest of the meeting, and probably won't this time; I think we're getting pretty good at the board. Close this one, and close this one. So, the way we go through the board, the most important stuff is all the way on the right-hand side.
A: Quite slow; let's see, this is a promotion. I'm just going to hit End to go to the bottom to see where we're at. Thank you for the override; this will likely merge, so I don't think we need to look at it further, and it will go to the done column on its own, and we'll get four points. Thank you. Do we need another LGTM? I just did one.
A: We're here together today to make this move forward. This is another one that's plus four but doesn't promote. I'm just going to go to the bottom and scroll up a little bit; we just got an LGTM. I must've looked at the same one twice, so let's close that one. Well, I'm not super into the Mac, and I'm not super impressed so far with the speed.
E: Let me remember what it is. Okay, hang on: endpoints resource lifecycle. There's a hold on that PR, and I feel like the reason we put a hold on that PR is because I had questions about whether the test was actually going to fail during the non-happy path, like how was the test going to behave if the watch timed out, and I feel like we never got an answer to that.
C: So if we go look at that, as I recall we're ranging over the watch channel, and if you don't get the things we're looking for, then the watch will time out. Then the question is what you get back from the channel: the channel's closed, and so you don't get the event you were expecting, so I would think it would fail, but it depends a little bit on how it's tested. I wasn't drilling into the code, actually.
C: Yeah, when your channel closes you should never get a watch event; your range will just end and you'll leave the loop, right? So when the channel closes, you will never see watch type Added, which is what you break on, so you need a flag in there to record the fact that you got the Added event you expect.
C: Otherwise, the timeout looks exactly like success. Right now it says: if the watch event type is Added, break. Okay, so you stop the loop, but if the channel closes you also stop the loop, and you don't know the difference. You wouldn't know the difference when you go further down later, so depending on how the rest of the test goes, it may fail.
E: Yeah, that was the thing that tripped it up at the very end. You delete the thing, and then you expect that there's no error from the client issuing the delete, so you've verified the client successfully issued the delete. Now you need to verify that the delete actually happened on the API side, and you do that by watching for a Deleted watch event. But if you never see it, you don't actually verify anything. So yeah.
C: That's all that happens; there's no return for this, right? If not, I don't know what happens when the watch times out. I don't know if they push a watch event down when it times out, or if they just close the channel, but even if they did, the way the code's written here, you would just say cool, it closed, and move on.
E: So, just a thing that I try to do: when I notice that there's a hold on a PR, I try to look for the comment that put the hold in place, because usually the person wrote down why they think the PR should be held back from merging, which I think I did in this case. Maybe I didn't, but that's generally how I'm trying to get that across, as well as bringing it up in these meetings.
E: The thing here is you're not setting a flag and then updating that flag inside of your watch loop; I think that's the most immediate problem. And then this could be another one of those tests that you rewrite when you work out a pattern with Clayton or somebody to dump all of the events into an array and then walk through the array and look for the events you expect. Oh sorry, or the events you didn't expect.
A: I'm having to type blind because my computer is receiving the events about 24 seconds after I type, so if I have typos, please forgive me. So let's go back and move these into the right place. Then I'm going to go back to our board and put this back to in progress.
A: Do we want to look at the definition? Because we've tried to update all of our tickets so they link back to each other, right? So in this one it's not at the top, but it does link back to here, and that's what we're trying to accomplish here; we approved it a while back, and it's loading. So this is core v1.
A: Service account: we have our mock ticket, our issue, and our PR, and what we're going to look at is going to attempt to cover patch, list, and delete for service accounts, per our documentation. The test in general is going to create a service account with a static label, create a secret, patch it, get the service account to ensure it's patched, and list, and we did.
A: We agreed that this flow was okay before we wrote the test, and so the test that we're about to look at is pretty much the test that we had in the initial issue to approve. So if we want to catch these things early, we should do it in that ticket, before we say let's go through and write the test. I'd love for that to happen before PRs get written.
A: We don't have to look at it here, because we're about to look at the test, but just a note: we should look at these a bit closer if we don't like what we see in this next part, and just to verify that this code does increase coverage. These are the endpoints it increases; they give us our three points. So now I'm going to go back into this test, and we're actually looking at the code itself.
A: I was able to get the comment in there. Okay, I think that might be an artifact of our test-writing flow: in our initial mock ticket we are not using the Ginkgo framework yet, we're just trying to do something super simple using the docs that's not Ginkgo-specific. [unintelligible] Do you think that's why you were trying to delete the service account, but, you know, do the deletion even if it failed?
C: Typically with these things, the only thing that usually fails to create is if the namespace isn't there; most of the time you can create something that references a dangling thing. I wonder if you even need to create the secret in order to patch the service account. I guess without it the service account wouldn't function as a real service account, but we're not testing that in this test. There's another thing.
E: I think that's my suggestion; y'all are free to tell me I'm wrong, but I think I would prefer it if the test relies on the least amount of things possible. So if you don't have to go create the secret to exercise the ability to patch a service account, that's less likely to fail, and I feel like primitive fields are probably the easiest things to patch compared to object reference fields. But again, caveat.
C: Ideally, these lifecycle tests would explicitly test the lifecycle events, delete and deletecollection, each separately. As opposed to: right now you're adding this test, and it adds deletecollection, which we didn't have before, but we did have delete before, probably because some other test deletes a service account somewhere. But that's not an explicit test of that functionality.
C: It's just implicit in some other test, and if somebody changes that test, like we're changing this one, for example: we're taking the secret out of here, and we are deleting a secret. What if that was the only place we tested deleting a secret, because we saw the endpoint already was green in coverage so we didn't bother, and then coverage actually goes down, because we don't have an explicit test. So I'm just trying to speak in general.
C: About what this exercises in general: the patch test hits another endpoint, but realistically, from a behaviors point of view, it's not a particularly useful test. That's what this comment says, if you read it; it's equivalent. What we would want is to actually test that the intended effect of that spec field actually works, right? So it's the data plane, not just the control plane. So we would actually go, when we're testing: we have a behavior about automounting.
C
Things
would
say
when
this
is
set
to
true
and
you
create
a
father
use
a
service
account.
This
is
what
happens
when
it's
set
to
false,
if
you're
in
a
pod
with
feeding
the
service
account.
This
is
what
and
the
test
would
have
to
cover
all
of
those
things,
so
so
a
general
that
specifically
this
tasks,
this
behavior
that
was
described
in
this
comment
was
a
similar
thing,
that
it
was
patching
some
field
and
it
gets
that
end
point
coverage,
but
it
doesn't
really
get
any.
E: Take that back, I did not. So, watch timeouts are bound to happen in a cluster that's loaded or at scale. Ideally we'd like these tests to be written in a way where they're not susceptible to watch timeouts but instead just reconnect, unless we want them to, you know, stop waiting after some interval, in the same way that we do a wait.Poll until some timeout that we, as the test authors, define.
E: But the weird thing is, we define the watch timeout at the very beginning of the test, so we say we expect this test in toto to take this much time, whereas with the wait.Poll calls we say we expect the change to be actuated within this interval. So it's a more granular way of specifying how long we expect something to take, which we're not doing with these watches.
C: So yeah, I think this gets back to what we already discussed last time, which is that we need this helper function to load everything into an array and then process the array. Ideally we resolve that, but what you can do in the interim is handle the fact that a watch timeout will cause the channel to close, which means you'd leave your loop.
C: We need the better utility function; given that we've got like three or four of these tests right now, maybe we just really need to focus on a utility function. Have you had a chance to set up a call with Clayton? I think he's busy; you're going to have to chase him a little bit to make it happen. Then we'll do that? Okay, we have not.