From YouTube: 20210309 SIG Arch Conformance
A: Hello everybody, welcome to our bi-weekly conformance meeting. Today is March the 9th, 2021, and I'm your host, Brian Flananz, and I will be presenting this meeting according to the CNCF code of conduct, where we're basically all going to be our most excellent to each other. Okay, let's start off. Thanks to everybody for showing up. First point: Hippie, that's yours! The change to protect the release branches.
B: Yep, I'm gonna pull that one up: the release branch. We had some merges into release-1.20 that changed conformance, which broke all the submissions. We met in SIG Arch, in the architecture channel, and noted that we should probably put a blockade plugin rule on the branches. However, the blockade plugin did not include support for branches, and Nikita amazingly stepped up very quickly to submit a PR to change that so that it did support it.
B: She also submitted the change to the configuration for Prow. There's a bit of a fix that still needs to be applied, but I just wanted to call out that the issue has moved forward, and yeah, just a great shout-out to Nikita for that work.
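For reference, the Prow blockade plugin is configured in `plugins.yaml` roughly as below. This is a sketch only: the repo, path regexes, and explanation text are illustrative, and the branch-matching field is the capability Nikita's PR added, so the exact field name may differ from what shipped.

```yaml
# plugins.yaml sketch: block merges touching conformance files on
# release branches. Paths and wording here are illustrative.
blockades:
- repos:
  - kubernetes/kubernetes
  branchregexp: ^release-.*$        # branch support added by the PR discussed above
  blockregexps:
  - ^test/conformance/testdata/
  explanation: Conformance test data must not change on release branches.
```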
D: Thanks. The lifecycle tests for deployment, pod status, and replication controller were all having a number of flakes, and going through them... if we just go to the very top, there's...
D: The basic summary is that pretty much 95% of all the flakes came commonly back to an issue with the volume for kube API access. I tracked it back to the service account token secret, which is what the volume was being used for, and the fact that it was getting deleted before the pod, as it's trying to come up, is able to mount it. That means the pod is then not available for doing whatever it needs to do as part of the test.
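For context, the volume being mounted for kube API access is the auto-injected service account token; in this era it was a secret-backed volume shaped roughly like the sketch below (names are illustrative). Deleting that Secret before the kubelet mounts it leaves the pod stuck exactly as described.

```yaml
# Sketch of the auto-injected service-account volume (names illustrative):
spec:
  containers:
  - name: app
    volumeMounts:
    - name: default-token-abcde
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: default-token-abcde
    secret:
      secretName: default-token-abcde   # if this Secret is deleted first,
                                        # the mount fails and the pod never starts
```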
D: It's falling into an area where I really have no idea where to track the problem next, and I'd really like a bit of direction on who's best to try and ping to get this fixed. Because this morning I looked at a flake for pod... sorry, for the proxy test, and it's got exactly the same issue, where the pod just doesn't come up because a volume is missing. So it'd be great to have some thoughts, maybe from Clayton, around what might be causing it.
B: I wonder if there's a way to debug the reasons behind the failure to mount the secrets for the API access. It seems like any pod by default is going to be having this issue, and I wonder why it might be isolated to your tests.
E: Barring a typo, it can be because the test needs to prep... and there are all sorts of weird bugs like this. This is an area where container runtimes sometimes aren't as good as advertised in terms of hitting all the edge cases. I know there's been a bunch of stuff that's been fixed in CRI-O and the kubelet.
E: So let me pull this up. Obviously, kind is... every environment is going to hit this a little differently, unfortunately, especially if it is an error in the kubelet, depending on how fast it's running and how contended the node is. If there is a race in the kubelet, it manifests like this, and sometimes it's just reproducible because of the way the test sits in the test framework, like it always catches it, and then we'll improve performance someplace and it'll move. So I am not positive. Let me take a look at this one.
D: Scroll down a little bit. There's actually output showing the actual volume getting created and being available, but then it gets deleted before the appropriate pod is trying to use it.
E: Usually that's because someone deleted the pod; that's a much more likely outcome. Are you sure that the test didn't fail up above? There is another possibility: if you have an error, and you have a defer that deletes the pod, you're going to see stuff like this, because you deleted the pod, because you're cleaning up because your test is failing, and you're not reporting the test failure. That can happen sometimes too.
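The failure mode described here, a deferred cleanup deleting the pod while the real error is what triggered it, can be sketched in Go. The helper names below are hypothetical stand-ins, not the actual e2e framework API; only the control flow is the point.

```go
package main

import (
	"errors"
	"fmt"
)

// deletePod and waitForPodRunning are hypothetical stand-ins for
// the e2e framework's cleanup and wait helpers.
func deletePod(name string) {
	fmt.Printf("deleting pod %s (cleanup)\n", name)
}

func waitForPodRunning(name string) error {
	return errors.New("timed out waiting for pod") // simulate the flake
}

// runTest shows the pattern: the deferred cleanup deletes the pod
// whether or not the body succeeded. If the body's error were dropped
// instead of returned, the log would show only the deletion, not the
// failure that caused it, which is what makes the logs misleading.
func runTest() error {
	defer deletePod("test-pod") // runs even on the failure path
	if err := waitForPodRunning("test-pod"); err != nil {
		return fmt.Errorf("pod never became ready: %w", err)
	}
	fmt.Println("test body succeeded")
	return nil
}

func main() {
	if err := runTest(); err != nil {
		fmt.Println("test failed:", err)
	}
}
```

So a "pod deleted" event in the log can be an effect of the failure rather than its cause, which is why checking whether the test failed earlier matters.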
D: The timestamps, when I was tracking it... the stuff to do with the volumes was happening even before the first part of the pod was really being exercised as part of the test. It's right at the very beginning.
E: That's at 52:19, which is after 52:10, which is when the test fails. So everything from 52:19... from the second line onward is after the test has already failed. This just looks like you didn't get the volume mounted; it says "verify"... I mean, only the first line is inside your test loop, at least on the first one.
E: You don't, by any chance, have an anti-affinity rule, do you?
D: There's nothing like that that, to my knowledge, has been set. This is across four different tests now.
E: If it's happening on the tests that you named, like that first one, "simple pod should return command exit codes", and the two pods mounting... that points to something that burped in the kubelet and failed. And this is 1.21... yeah, this definitely needs investigation, but by the SIG Node team. This is not your test, I don't think.
D: Yeah, I was just trying to trace it from the very top. The only little clue I got was the prefix for the volume; as for knowing where to go next, it's like a black hole at the moment for me. Can you say this fails every time? Oh no, it's not every time, it's just... yeah.
D: If we scroll to the top and look at the test... if you open up the testgrid links at the top there, Ryan, the top two... mm-hmm. It's a few little minor blips, but it's consistently this blip, like 99% of the time. I've got a few other little queries that I'm looking at, but this seems to be the common problem that we're facing.
E: This is a container runtime or node problem; they need to debug it. It's not you. I mean, it could be you, but looking at the failure... there are enough tests that failed elsewhere in that run that this looks to me like a container runtime blip, or it could be kind doing something really weird. You're getting "failed to create pause", so on one of these tests there's a context...
D: Okay, but at least I'll look at adding some extra notes around the same flake I'm seeing, and the proxy one, and then I can pass that on.
E: Yeah, the container runtime... we're not seeing this as much in the core kube tests because we don't bump parallelism up, but, for instance, in OpenShift we run at much higher parallelism.
A: That's gold, thank you very much, Clayton; that was very valuable. And thanks, Stephen, for spending a lot of time digging into this; I know you took a lot of effort with that. That takes us to the next point: if you want to take it, or should I discuss the ineligible endpoints demo?
B: Rather than having this inside of the APISnoop repository, inside of an SQL query, inside of a database that nobody looks at, the ineligible endpoints YAML being in a repository makes it super straightforward to see what our exceptions are. We will use it to help decide what endpoints remain for the body of work for this sub-project. It just needs... I haven't had any negative feedback on it yet; we're just kind of waiting for general approval.
A: We had this in the architecture meeting and they were also keen that it happened, so it's basically just moving it into the community repo so it's more visible.
A: Thank you very much. Okay, next topic: CronJob endpoints. Some good news there: we have tests that promoted the CronJobs from beta to GA, and they came with this... I'll go to the APISnoop snippet shortly to show that. So that's some good news, and then the endpoint slice controllers also came in from beta to GA.
A: However, with the conformance test that did merge, only one of the eight endpoints was hit by the test.
A: So in the PR I did hit them up and gave them this information. Basically, there are seven endpoints that will carry into conformance as new technical debt at the top, which is clearly not what we want.
A: So one of the eight came in along with the 12 CronJob endpoints, so we've got 13 there. So yeah, we asked them to have a look at it; we are at crunch time. If I open the conformance test... it was quite a hefty one.
A: More PRs... I'll get to that point of yours just now; there will be more PRs coming. I see there are two more promotions coming up, which are running out of time, so the community should just be aware and keep an eye out for those, to make sure that they merge and do not bring in new technical debt when they promote from beta to GA.
B: There have been a few times where we'll have maybe just one person present, or nobody present. This is actually pretty excellent, to have both Clayton and Dims on the call, and I just wanted to make sure that we were able to provide a meeting time that was flexible and available to the people interested in attending.
B: But that's really the only statement I have. Thank you so much for being here, Dims; I know you've got a lot on your plate and attend a thousand meetings.
F: There's a conflict usually, and when the conflict is not there, then I show up here, so sorry about that. So, is this the end of the meeting? There is something I wanted to bring up.
A: Is there anything left? Ah, sorry, Dims, I didn't see you, as I'm watching the document. A shout-out from me: thanks for supporting, with a lot of promotions going through this release.
A: So what I would suggest: maybe we should ask in the conformance meeting chat, or the conformance channel, whether we should move this to another slot where we could have more folks, because it's really important that we get the right feedback, and if people are stuck somewhere else, we are happy to move the meeting to get more community involvement. It's important for us to get everybody involved.
A: If you want to, you can go quickly; I just have the important PRs at the end to discuss. So the floor is yours, and there's a lot of time available.
F: Thank you. So there is a problem floating around; it hit SIG Arch and SIG Node. It's called the exec probe timeout. If you search for it, you will see a bunch of hits, both on the mailing list as well as in pull requests and issues.
F: So there was a call just before this one, in SIG Node, where they were talking about what we need to do, that kind of thing. Did you catch this, Clayton, or is this news to you?
E: I think... well, it sounds terrifyingly like a discussion I had the other day on a completely unrelated issue, so either I caused it, or serendipity happened. It sounds familiar. Is it...
F: ...how probes run. So yeah, let me explain for everybody. In 1.20, we, the SIG Node folks, decided to make a change in how the exec probe timeout works. I think the original intention was to make sure that things behave correctly in dockershim, but it might have had ripple effects elsewhere, and they are trying to figure that out. But the bottom line here is, at least in dockershim...
F: ...at that point in time, the timeout was not being honored, in the sense that the container was not getting killed. So then the code was updated, so the container does get forcibly killed at the specified timeout, and the default timeout is one second or something like that. So what is happening is that the Microsoft folks ended up showing up at the SIG Node meeting as well...
F: ...as well as in general chatter on Slack and email, asking: oh, we need a way to go back to the previous behavior. And there is a flag, a feature flag, which will get you back to the older behavior.
F: But then there was a comment in the code that said we're going to take out this flag in the next release. All right, so they were scared, saying: oh, don't take this out.
F: We have to worry about this; there are too many workloads that are in a state where we can't move them out. So that was the worry. So their petition to SIG Node was: don't take the flag out. And then there was a PR that merged today in features.go that removes the timeline that was originally specified. But the fallout for this team is that when the flag is on, conformance apparently fails, and that is worrisome.
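For context, the timeout in question is the `timeoutSeconds` field on a probe, which the kubelet started enforcing for exec probes in 1.20. A minimal sketch (pod name, image, and command are illustrative, not from the meeting):

```yaml
# This exec probe takes longer than its timeout. Since the 1.20 change
# the kubelet enforces timeoutSeconds (default 1s) and fails/kills the
# exec; previously, dockershim silently let it keep running.
apiVersion: v1
kind: Pod
metadata:
  name: exec-probe-demo          # illustrative
spec:
  containers:
  - name: app
    image: busybox               # illustrative
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 5"]   # exceeds the timeout below
      timeoutSeconds: 1
      periodSeconds: 10
```

The opt-out being discussed is the kubelet feature gate, e.g. `--feature-gates=ExecProbeTimeout=false`, which restores the old non-enforcing behavior.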
F: So I think they have lined up a topic in the SIG Architecture meeting too, in addition to whatever happened in SIG Node. So if you want, please take a look at the SIG Node video call recording. I don't think it's up yet; I will poke the conformance channel when it shows up. So that's the basic premise, right?
E: No, which actually terrifies me, because we found like three or four issues in probes recently. It was just similar, which just means there are lots of issues.
F: Right, yeah. There is one more that is landing today. You know, Elena... I was reviewing hers, about the liveness probe and the readiness probe with an additional timeout, not at the pod level but at the probe level. So that's the thing that's about to land.
E: Ours probably wasn't; it was a networking stack problem, not a general kube problem. But that did raise... the reason why I was asking, or why I thought it might be related, is that there was then the question of: are our exec timeouts correctly gated to the probe length? If they aren't, and if that went wrong, that would be very bad, because execs would pile up and there'd be a bunch of other issues. So that was just my... I was just like...
F: Yeah, for sure. So the one other thing that I would ask of this team is: can somebody take a quick look at the chatter and maybe help figure out what to do in the SIG Arch call?
B: I've got that down in the action items. I can do some searching; if there are any existing conversations that are poignant starting points, it'd be great to have those links.
F: Yeah, I'll throw you more links, Hippie, on the conformance channel.

B: Beautiful.
A: That is worrisome, but now we've got it highlighted, so we'll have a look at it before SIG Arch in two days' time. Now, last up for the meeting: we have one PR that is at the moment running on the testgrid, which we still want merged for this release.
A: If we look at the testgrid, it is running nice and green. Don't go red on me now... let's see, all nice and green, good times. So this would be ready on the 19th, just in time. The 18th, I think... let me just go back to the issue of the PR; I did note it somewhere. I think it would be... yeah, on the 19th of March.
A: It would be ready for promotion, so we will prepare the promotion and bring in everybody in the architecture meeting to give us the necessary lgtm and approve to get this in before code freeze on the 23rd; we'll appreciate some support there. I think it's all good, then. Yesterday we had a merge for this PR right here, which is a promotion.
A: And all running smooth. That takes us, if we look at the APISnoop link, to 24 new endpoints with tests for this release. It shows 47 because, when you have metadata for a test that's updated in a previous release, and you update the metadata in a second release that applied to that specific test, it pulls it over to the second release's metadata.
A
There's
no
program
programmatic
way
to
to
separate
that,
so
it
does
skew
it
a
little
so
but
24
of
those
we
keep
keep
record
of
which
ones
come
in
new
is
brand
new
covered,
and
if
we
look
at
the
technical
debt
we
had
167
in
the
previous
release,
we
got
129
so
moving
along
very
nicely
to
get
to
our
target
end
of
the
year
to
get
us
under
75
untested
endpoints,
killing
all
all
technical
debt.
So
that's
going
very
well!
Thank
you
very
much
for
everybody.
That's
supporting
us
to
get
this
over
the
line.
Hurrah.
B: On the APISnoop and the conformance stuff, a side note: we're doing some things with the k8s-infra working group, and since we have a TOC person on the call, I'm interested in seeking some sponsorship for a CNCF infra working group, and I would maybe connect.
B: Offline. There are beautiful things happening in the Kubernetes community that I'm trying to replicate out beyond the Kubernetes community to the entirety of the CNCF, trying to find a way for the cloud credits that get donated to be stewarded well. And I think that needs some governance, and we already have beautiful structures in place that I'm trying to do a bit of minor replicating of at a higher level.