From YouTube: 20200114 SIG Arch Conformance
B
Awesome. Welcome to the January 14th, or 15th, edition of the conformance office hours meeting. I'm Hippie Hacker, and I'll note that all of us need to conform to the code of conduct, which basically means: play nice. I'll share my screen as we go through the meeting notes and pop that over here. Which one would that be? It's going to be this one here.
B
Awesome. First off, on APISnoop: APISnoop seems to be several different things, and one of those things is that place where we can send links to folks and say, here's the coverage that's available on apisnoop.cncf.io. I'd like to have a feature close on that, so we can refocus and then double down our efforts on other things, like the blocking job and supporting test writing.
B
That's all we have as far as the data that's available dynamically from prow, and we can still get to the oldest supported release, because there will be a prow job running for that release. So that gets us back, I believe, a year and a half. If that's not okay, then we need to find some way to look farther into the past on an ongoing basis. I think it's sometimes useful for presentations; I don't know that it's useful on an ongoing basis, and I don't want to spend too much more effort beyond what we've listed there.
D
I think it is useful for historical context. To make sure that data doesn't end up aging out of GCS, you can copy it and archive it somewhere. I don't believe that's guaranteed; you're relying on the 90-day retention of our GCS buckets, and I don't think that relying on that for the N minus one release job is going to do what you expect it to.
B
I may be wrong, but my understanding of the N minus X release jobs is that they run the conformance tests as they were at that moment in time, and if we're running the same job that says run the conformance tests, we should still have those historical numbers, and I think they reach back far enough for us to have a meaningful "are we progressing forward?"
D
When I gave a presentation at Shanghai last year, there's a topical deploy in my GitHub, and it's got copies of the data that was pulled down from GCS, because I figured it was all going to go away. So, just being able to show a consistent set of numbers for the time period that we've been working on conformance, and, as the test set changes, being able to recompute those numbers, right? Anyway, I think I'm derailing.
B
I'm trying to say that's fine, I think. And that brings up another question, which I'll add: for what we're wanting to monitor, what are the jobs we want to use? Right now we're talking about moving to kind, but I don't know which job we should use, and if we're looking at the N minus 1 jobs and they progressed into something else, into something new, I think it's okay for consistency.
B
If we define those jobs and look at those numbers, then as far as pulling the job history up and having consistency looking back, I think we're at a point now where, whereas previously we were manually looking at and parsing JSON data, with a different approach for each of the various ways, now it's all just a query: loaded, for example, into Postgres directly and queried via GraphQL directly. The way that we query might change over time, but I think the same metric should apply to how we measure it.
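For illustration, a minimal sketch in Go of the kind of query this enables once the audit data is loaded into Postgres. The DSN, table, and column names here are assumptions for illustration, not APISnoop's actual schema:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver, registered for database/sql
)

func main() {
	// Hypothetical DSN for a local Postgres loaded with audit-log data.
	db, err := sql.Open("postgres", "postgres://localhost/apisnoop?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Assumed schema: list stable (GA) endpoints that no conformance test hits yet.
	rows, err := db.Query(`
		SELECT name
		  FROM endpoint
		 WHERE level = 'stable'
		   AND tested = false
		 ORDER BY name`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

Whatever the real schema looks like, the point made above holds: the query text can change while the metric (untested stable endpoints) stays the same.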
D
I don't want to throw you off too much here, I think. Maybe what we are all collectively more interested in is how we can get new updates of the report today, because as I look at APISnoop's website, the data that I see comes from December 1st, 2019, and, as you know, we have somebody here working on conformance tests that are generated from behaviors. We'd love to see how things improve with newer reports, so a top priority would be the coverage over time and automatically updating the report that's shown on the website.
B
We're hoping to get soon to the point where, with what we've got working now, when you just deploy APISnoop it goes and pulls the latest release that day. As far as what's on the front page, we just haven't switched that over to be what's on apisnoop.cncf.io right now, and so that's what I'm trying to get to: closing out that development until it is what's on the front page. Okay.
A
Right, and I'm not sure that works, because it's been so long. Eventually we want to have some sort of profiles, right, and some sort of conformance around these optional features that may not be supported in kind. So suppose we agree there's a cloud provider profile, and that's going to include, you know, a number of things like cloud load balancer support, the type of things that I don't know whether kind supports. There may be things that aren't supported, so it makes sense to use kind, I suppose, but I don't think we need to be... we're...
D
I mean, like, I recognize that you use kind locally for development, and that's a much faster round-trip thing, so it'd be great if whatever you compute with kind locally is the same thing that you see remotely with the jobs that run on GCP. But I kind of feel like... yeah, I guess I agree with John. I feel like we have more potential flexibility to move into other profiles and stuff if we continue to rely on the GCE job.
B
I think we're missing... this is something I want to kind of just put in the queue: that we do get to an end, like a stop, until we need more features, rather than trying to add things that we feel are useful. I just want to get direct asks from the community, so that anything extra we're adding to the apisnoop.cncf.io web interface is adding useful things. Otherwise we're going to focus on the way most of the community will interact with it, which is via the prow job.
B
The next part is our test writing review portion. We can go through the board for these, but I've kind of summarized it here so we can quickly go through it. This create-new-kubelet-deletion test is currently closed, but we need to decide what we're going to do: back it out, make it not so large.
D
The reason we said not to do it was because it looked like it was exercising performance criteria, and we don't think that performance criteria are a valid conformance thing, right? It was not clear to me whether or not this test was written to exercise some other endpoint or something.
D
We're still trying to chase down these flakes as they come up; there are a lot of them. You may want to discuss revisiting putting retries back on, but the biggest cause of flakes, unfortunately, is something that's coming down from containerd, sorry, runc, and we're kind of at the mercy of when we can get that pulled downstream and then get the appropriate things updated in local Kubernetes. Right, and that seems like that's an N-weeks thing.
B
I've structured these, and I think it's been working well: when we have an issue to talk about the test to be written, we review that in the meeting, and then, separately, when it gets approved in this meeting, we go write a test for it, until we have more in the behavior style running. So this was the issue, and then we said that was okay. It got moved to in-progress nine days ago; we're currently sitting at create priority class name, and the...
D
The conformance requirements have an awful lot about, like, we're not allowed to promote tests that rely on events for conformance, and I get that this isn't a test that relies on a specific component of Kubernetes firing off a specific event at a certain point in time; this is using a completely fake event generated by itself, so I think that makes sense. But it's tied to the fact that we don't guarantee event delivery, so conceivably this test could fail under a highly loaded cluster.
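For context, a minimal sketch of the self-generated-event pattern being described, using client-go; the namespace and object names are illustrative assumptions, not the actual test code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()
	ns := "default"

	// The test fabricates its own Event instead of waiting for one that a
	// Kubernetes component may or may not deliver.
	ev := &corev1.Event{
		ObjectMeta: metav1.ObjectMeta{Name: "conformance-test-event", Namespace: ns},
		InvolvedObject: corev1.ObjectReference{
			Kind:      "Pod",
			Namespace: ns,
			Name:      "fake-pod",
		},
		Reason:  "Testing",
		Message: "synthetic event created by the test itself",
		Type:    corev1.EventTypeNormal,
	}
	if _, err := client.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Read it back to verify the endpoint round-trips.
	got, err := client.CoreV1().Events(ns).Get(ctx, "conformance-test-event", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("event message:", got.Message)
}
```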
A
I agree with you there. I mean, and I think that, okay, we don't get guaranteed event delivery, but number one, is that actually true in practice? I mean, I know we put events in a separate etcd cluster, and I guess if there were some failure there, we would be in a degraded state, not a completely failed state, maybe. But realistically... like, I'm speculating here.
A
But like, realistically, it's a bad idea to run a bunch of conformance tests on a cluster that's so heavily loaded that it can't deliver events or anything. I think we're gonna see, you know, other failures, because we're not gonna schedule pods; there's all kinds of crap that's gonna go wrong. Like, I think that's kind of okay. Yeah, the intent of that, as I understand it, is that we're not to rely on specific events, because there are no guarantees, no versioning guarantees, on how they behave, right, so like from cycle to cycle.
A
So I guess the question would be: did we just not know there was already a test? Did we somehow miss that there was a test covering this? Because they must use this same... how do they configure it in that test below? They must use that field, presumably, and so somehow we just missed that there was already a test. Is that what happened?
D
If the data... if the resource is something that, like, effects change within the system, we need to verify that that change also occurs. And so here I can totally see how you wanted to write a test that CRUDs a priority class, but then it also, like, exercises the functionality of the newly created priority class; and so if that other part is preemption, to me it's unclear whether exercising preemption is taking that too far, like...
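A rough sketch of the two layers being weighed here, using client-go; the names and values are illustrative assumptions. The CRUD layer creates the PriorityClass, and a first functional layer checks that a pod referencing it gets its priority resolved; actually exercising preemption would go further still, which is the open question above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// CRUD layer: create the PriorityClass itself.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "conformance-test-priority"},
		Value:      1000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Functional layer: a pod that references the class by name; admission
	// should resolve spec.priority from the class value.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "priority-test-pod", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "conformance-test-priority",
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if created.Spec.Priority != nil {
		fmt.Println("resolved priority:", *created.Spec.Priority)
	}
}
```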
A
Aaron's question was: is there some smaller set of functionality that we would want to exercise here? Like, is there a place for two tests: one smaller, minimal set, and then the whole preemption path? But yeah, I mean, in principle, ideally you test the minimum and then you test the bigger thing, but, great.
F
Go back... I think I might have seen something when you were scrolling, about an import for something-something-something types. Okay, I've imported a types package into the test, which is required for some part, and that may be where some failure was, but with that we can revisit the compile errors and...
B
I was hoping to find somewhere where we could get a list of PRs that are not yet merged, or tests like that. Or it's enough if we just have that over time: what was it two weeks ago, and we do it once a week for the last three months? Sure, that would give us, you know, 12 moments to look at.
D
To recap: there was a test case that exercised a whole bunch of functionality of limit range, by, like, creating it, and then creating pods this way, and creating pods that way, and creating pods the other way, and updating the limit range and then creating it this other way, and I felt like, gee, that one test is doing an awful lot to describe a single behavior. And so I suggested breaking it up.
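A minimal sketch of the suggested direction: one focused test per behavior rather than one omnibus test. This hypothetical example covers only LimitRange defaulting; the client-go calls are real, while the namespace and values are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// One focused behavior: a LimitRange's default container limits are applied
// to a pod that does not specify its own.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()
	ns := "default"

	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "lr-defaults", Namespace: ns},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type:    corev1.LimitTypeContainer,
				Default: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("128Mi")},
			}},
		},
	}
	if _, err := client.CoreV1().LimitRanges(ns).Create(ctx, lr, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "lr-test-pod", Namespace: ns},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}, // no limits set
		}},
	}
	created, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("defaulted memory limit:", created.Spec.Containers[0].Resources.Limits.Memory())
}
```

Updating the limit range, or creating pods "this way and that way," would each become their own equally small test.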
B
And the presentation last week, this is kind of going back to that, where we went through Zach's deploying APISnoop into a cluster and then going through and querying the list of tests that hit very few endpoints, where those endpoints are hit by very few other tests, so very focused tests that are pretty much the only tests that hit those endpoints, to go through those and see if they're ready for promotion.
D
Okay, I took a pass at rolling through all the PRs on the conformance board last week, and we've done some of that today, and I will do more of that today. But part of me really feels like we have a lot of PRs in flight, and I'm actually a little confused about, like, where we're going with it. And so part of this is, right, like, lacking that continuously updated APISnoop, I can't see the coverage going up. Like, if I scroll way to the top of the document, you know, seeing pod spec field coverage was one of the things we talked about from a behaviors perspective, so I'm just trying to understand: as we write these new tests, are we writing them to, you know, increase the core GA API endpoint coverage, and if so, can we have a report that shows we're going in the right direction there? And then, are we writing tests to cover different fields in the pod spec, and if so, can we see that we're going in the right direction there? Like, maybe this is just me being a little out of sync with the meeting as of late, but I think...
A
That will... the idea behind that would be to get the kind of report you're talking about, to be able to see the coverage on that metric. But right now we're still getting the measurement of endpoint coverage, and so I think that the focus recently has been on endpoint coverage, and I think that there's value in that stuff, right. So long term, the direction I still want to go is the behavior stuff.
A
I just need to get back with Jeffery, and he's sick today, unfortunately, but we wanted to have the full machinery working with a handful of behaviors. I have to sync up with him; I was hoping to have that today, and then we can start filling out those lists of behaviors and building out the content once we have the machinery. But I don't want to approach SIGs or anything until I have the machinery and good examples working. Yeah, I agree.
D
With that... oh, okay. Anyway, if there's any way I can help out, let me know; I'd like to make sure he's unblocked, yeah.
D
Okay, so what I will try to do to help in the interim is, like I said... I feel like having PRs authored by people back in August still hanging out on the board is driving me a little batty, so I'm just wondering if we can... so it sounds like PRs that were written by Devon need to be picked up by somebody else who...
B
It may be that some of the complexity is coming from this approach that we took, which I think went really well last week, and we haven't gotten to that point yet: we're writing mock test tickets, and the tickets go through our workflow. Run APISnoop, use some query to identify the endpoints we're going to focus on, find the documentation for that particular endpoint, talk about the mock test.
B
What we're going to do is write an example in really simple code, and then we actually use the way that we work to run that and verify that we did hit the extra endpoints and that it will increase coverage here. And so we have more tests that we've written mocks for that we'll go through, and they're currently in the triage column of the board. So if we go back to our board, our org board on Kubernetes...
B
Right, so we don't work on it until it's in the sorted backlog, and there needs to be an issue created that we walked through and said: this is going to increase coverage by this much; here's documentation and an example; do we agree that the example generally is valid for conformance, and do we want to write that work? So that it's all kind of up front whether we want to go through and prioritize writing that test over the next two weeks. Okay.
B
Just getting back from the holidays, it's probably just a bit slow to... they can't come up. I'm going to open these real quick: one, two, three. We'll stop there and do these three. So this one is proposing a new test for listing namespaces, and if we go through and query the actual endpoints we're going to focus on, it's list and delete. There's docs, here's the general test outline, and our test here.
D
And I'd file this in the exact same bucket: it's doing something synthetic, as for a secret; secrets and config maps are functionally exercised the same, so that sounds great. The key thing to keep in mind, which we learned while writing the secrets test, is that it was a bad idea to list all secrets with no label selector.
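A minimal sketch of what the proposed list/delete namespaces test could look like in client-go, applying the label-selector lesson just mentioned; the names and labels are illustrative assumptions, not the proposed test's actual code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// Create a namespace carrying a unique label owned by this test.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{
		Name:   "conformance-list-test",
		Labels: map[string]string{"conformance-test": "list-namespaces"},
	}}
	if _, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// List with a label selector rather than listing everything in the
	// cluster: the lesson learned from the secrets test.
	list, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		LabelSelector: "conformance-test=list-namespaces",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("matched namespaces:", len(list.Items))

	// Exercise the delete endpoint on the same namespace.
	if err := client.CoreV1().Namespaces().Delete(ctx, "conformance-list-test", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}
```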
B
...the board, and then we move it to this... or can someone else move it to the sorted backlog for me as we go through them? Yeah, guys, drop it on the top. And the next one is... here's two more, for core component statuses. I'll just kind of drop down, basically, to the code on each one of these. We...
B
Right, I think we should probably have... because some of these things are all unique to the particular endpoint that we're deciding is not conformant, and I don't really want to add that to the docs of conformance. Or is it okay for new things to have this nice definition, and for things that have been there for a long time, that we're not going to write conformance tests for them, for various reasons...
A
You know, this is sort of one of those things where, I guess, since it's in core, it's just an API; like, it's there for everybody. But if somebody were to write their own API server, like, I don't see a need for this to be part of conformance, in the sense that... it's really, I don't know, it feels like a squishy definition. I keep hitting... there's almost judgment involved.
A
Right, so that's why we're like, this is kind of useless: we made it, but we never actually, you know, finished the functionality, basically. But it was put in, like... what did Brian say?
A
Like, you know, in issue 50 or something ridiculous like that. And so it's sort of like, it's there and people use it, and so is that a sufficient reason to make it a conformance thing? Because the fact is, anybody who is using the open source API server and not recreating the API server functionality gets it for free. You can make the same argument about component status.