From YouTube: 20201022 SIG Architecture Community Meeting
A
Welcome, everybody: this is the Kubernetes SIG Architecture meeting for October 22nd, 2020. Please recall our code of conduct and treat each other with respect. Looks like we...
A
Pretty short agenda today. First up we have Laurie Apple with capturing KEP template feedback. Laurie?
B
Hi,
so
I
just
wanted
to
present
this
topic,
but
this
group
has
the
expertise
to
actually
drive
the
conversation.
It's
basically
a
certain
criteria
that
we
want
to
have
on
test
conformance
and
then
also
some
other
quality
issues
around
how
to
graduate
caps.
C
I have some, for the conformance area, that we might roll up with SIG Release.
C
We have the release process: for the conformance working group, we have the release-blocking job that says no new GA APIs can promote without conformance tests. It used to be just a subnote, a comment in the KEP template, and now it's been elevated up to a SIG Release sign-off. If you follow that link — I'll drop it in the channel — it's under the SIG sign-off checklist.
C
There is an area that says "test plan is in place, giving consideration to SIG Arch and SIG Testing input." My only input is that we would add, for any GA APIs, the criteria we have there — with a link to it — to be sure that that checkbox is ticked.
C
Sorry, I think I didn't share it — I didn't share the right link, my bad. Oh, okay: this link, the sign-off checklist.
C
For SIG Arch's piece — test — we need to have a checklist item, or either update that one to say "ensure that GA APIs..." because that's what our test covers. The test plan overall is, I think, from SIG Testing, but as far as SIG Architecture is concerned, the test plan for when they're going from beta to GA — we have to have those conformance tests in there. Otherwise it adds to the debt. And we also have the blocking job that we could say to check: if your APIs have promoted, this job will be failing and informing SIG Release, or conformance.
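The release-blocking check being described — no new GA API may promote without a conformance test — boils down to comparing promoted endpoints against tested ones. A minimal sketch of that logic (hypothetical names and data shapes; the real job is driven by APISnoop data, not this code):

```python
def untested_ga_endpoints(endpoints, tested):
    """Return GA ("stable") endpoints that have no conformance test.

    endpoints: mapping of endpoint name -> stability level
    tested: set of endpoint names covered by conformance tests
    """
    return sorted(
        name
        for name, level in endpoints.items()
        if level == "stable" and name not in tested
    )


def job_passes(endpoints, tested):
    # The blocking job fails if any GA endpoint lacks a conformance test.
    return not untested_ga_endpoints(endpoints, tested)
```

In this shape, the job's "signal" is simply the non-empty list of untested GA endpoints.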
A
If we've automated it — there are two steps, right? There are all the things marked as required. It's when the release team — the enhancements team — targets something to a particular release and says "I want this enhancement, this KEP, to go into this release." We're saying that the release team, that enhancements team, is actually manually — humans — checking: are these things done? I mean, the author of the KEP should check them off for the convenience of that person, but basically, things that are blocked by a job we don't need to put in this checklist.
A
That's
what
I'm
saying
the
job
is
even
better
than
the
checklist.
The
checklist
is
because
we're
lazy
and
we
didn't
create
jobs
right
I
mean
that's
what
it
comes
down
to,
or
it's
for
a
different
step.
We
don't
have.
We
don't
have
a
way
to
validate
the
milestone
assignment
right
now.
Anybody
in
the
milestone
maintainers
can
just
assign
it
to
a
milestone.
A
So, Laurie — I guess, what's the ask here, a little bit? You just want to bring our attention to this, so that folks here on the call will take some time and go over that?
B
Yeah — however you see fit. Basically, Jordan's comments.
A
Okay, so some focus on improving the test criteria, the test plan — and what he suggests in particular here, I think, would be nice. We do this for conformance. One other thing we do for conformance is that the tests have to not flake for a certain amount of time before they can be promoted, and so people put in their promotion PR a link to the TestGrid — filtered such that you can see those tests — showing that, yes, they've been passing for the last 14 days or whatever it is. So he's asking for the same thing: let's create a link within the KEP that you can just click on and say, "Oh yes, the tests associated with this KEP — the components and the specific tests created for this feature — are passing." I think that's a great idea.
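The "no flakes for a certain amount of time" soak criterion described above can be sketched as a small check over per-day pass/fail results. This is purely illustrative — the real promotion review reads TestGrid, and the 14-day window is just the example given in the discussion:

```python
from datetime import date, timedelta


def soaked(results, days=14, today=None):
    """Return True if the test passed every run in the last `days` days.

    `results` maps a date -> True (pass) / False (fail) for each day a
    run happened; any failure inside the window blocks promotion, and a
    test with no runs at all in the window has not soaked either.
    """
    today = today or date.today()
    window = [today - timedelta(days=d) for d in range(days)]
    runs = [results[d] for d in window if d in results]
    # Require at least one run in the window, and no failures at all.
    return bool(runs) and all(runs)
```

A reviewer LGTM'ing a promotion PR would effectively be asserting `soaked(...)` over the linked TestGrid history.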
B
And I would say, just drive all the conversation toward the enhancements channel, because we'll use this as criteria for graduating these KEPs. And then also, there may or may not, at some point, be some effect on the proposed PRR process, which won't be using "testing plan" — like, "do you have a testing plan?" — as a criterion, because it's just too hard for us to measure. You know, you could just say:

B
"Yes, I have a testing plan," but we don't know how that succeeds or fails, right? But say, through this group, we do come up with some criteria that we could actually measure in that process — it'd be good to think of it like that. But for now, we can't go with a yes/no "testing plan" standard — "yes, that's important; no" — with no kind of measuring standard, because it just won't mean anything in this PRR process that we're proposing.
A
Okay, sounds good. Thank you, Mark. All right, all right. Next agenda item: Tim Allclair — are you on the line? I don't see you in the list.
A
All right, does anybody else...? Right, so this looks like a discussion that happened a month or so ago on the mailing list. Looks very interesting. I have not gotten myself familiar enough with it, although it looks like a good one. Does anybody else want to talk to that — and then Tim can watch the recording — or shall we push this to the next meeting?
E
This is Micah. Tim and I were just on a PSC meeting; we should be joining in just a moment.
C
Sounds good. I've dropped a link to the markdown file for our release, and if you want to let me share — or if you want to share the slides, I can go through there. Let me share. When you hit the magic allow-us-to-share button, we'll make it so. Am I...? Let me — I'm not there. We are. Thank you, beautiful.
C
So
this
is
the
the
api
snoop
and
the
conformance
sub
project
okrs
and
and
how
how
we're
doing
not
trying
to
change
too
much
just
keep
it
increasing
in
velocity
and
consistency,
quick
update,
our
primary
okr
was
to
increase
stable
coverage,
and
that
usually
is
is
the
main
goal.
We
would
love
to
get
40.,
I'm
not
feeling
it
this
this
court.
I
don't
think
we're
going
to
stretch
to
that.
Our
primary
goal
is
30..
C
This
replication
controller.
Pr,
if
we
can,
can
look
at
that,
I
think
it's
a
solid
promotion
that
was
flaky
a
while
back.
So
if
anyone
has
permission
to
do,
promotes
and
lgtms
on
this,
just
like
we
talked
about
with
the
in
the
process,
we
have
a
link
to
test
grid
that
shows
our
our
zero
failures
for
quite
a
while,
and
it
just
requires
two
weeks
of
solid
soak
as
a
normal
test
and
meeting
all
the
other
criteria
for
conformance
test
for
promotion.
C
So that's an easy seven points for us that contribute toward our 30. Currently we have zero officially on the board, but these are things that are ready. This plus-12 pod proxy test has been very interesting, and I think Clayton had exposed some things where we weren't reading the values of the HTTP request, and we've actually written a new piece of updated arguments — let's see if that's a link, way down at the bottom here... no, it's not there yet — to update the software that we use for testing inside the pod. I just wanted to give an overall view: we're trying really hard to get to our goal, but we'll need some help — we're fighting for these pretty hard. The next part is our intermittent flakes. These are still a bit flaky, and we're trying to debug those.
C
Hopefully
this
will
come
through
soon.
We
were
missing
a
lot
of
our
conformance
tests
that
we
had
written
test
for
and
we
had
supported.
C
We
tried
to
patch
the
policy
because
the
policy
inside
of
our
ci
jobs
filters
out
all
of
our
event
event
logs
because
of
performance
reasons,
but
in
order
for
us
to
use
that
as
a
metric
of
coverage,
we
were
not
able
to
see
it
in
the
logs.
Therefore,
it
doesn't
hit
so
as
soon
we
ended
up
creating
a
new,
proud
job
based
on
the
kind,
the
kind
conformance
job.
C
However,
there's
some
current
operations
things
around
the
the
the
pr
not
merging
for
that
job
to
be
updated,
but
I
suspect
we'll
have
that
within
a
day
or
two.
So
that
will
be
seven
points.
We
that's
the
the
pivot
here
was
we
were
trying
to
patch
the
original
gce,
based
conformance
the
main
signal,
but
the
ways
of
the
the
suggested
ways
is
to
set
some
variables
for
the
advanced
audit
policy
and
have
it
lay
down
the
yml
file.
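The "advanced audit policy" being laid down here is the `audit.k8s.io` Policy format. As a hedged illustration only — not the actual CI configuration — a policy that keeps event records visible to coverage tooling, rather than filtering them out, might look like:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record request metadata for events instead of dropping them, so
  # coverage tooling (e.g. APISnoop) can see which endpoints were hit.
  - level: Metadata
    resources:
      - group: ""
        resources: ["events"]
  # Illustrative catch-all: record metadata for everything else too.
  - level: Metadata
```

Rules are matched in order, so a job that drops events for performance would instead have an earlier `level: None` rule for `events`.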
C
But
I
don't
no
one
is
using
that
in
any
other
test
across
test
grid
and
it
seems
to
to
fail
so
rather
than
trying
to
debug
a
gce
job.
It
was
much
easier
to
base
our
work
on
the
kind
job
we're
having
some
issues
around
pod
and
node
proxy,
so
there's
some
redirects
bugs
that
need
to
be
fixed,
I'm
hoping
that
we
can
get
those
in
after
120..
I
think
they're
unlikely
for
for
this
release,
so
those
points
will
be
shoved
out
a
bit.
C
In
conclusion,
for
that
main
okr,
I
think
we're
going
to
be
on
target
even
as
we're
hitting
edge
cases
via
flakes
policies
and
redirects
flakes
are
are
hard
and
policy
changes
took
a
lot
of
effort
to
get
through.
As
far
as
our
audit
policies
changes
to
make
sure
we
can
see
those
event-based
logs
and
proxy
redirect
having
bugs
in
the
api
implementation.
C
I
wanted
to
get
a
little
feedback
here
from
sigarch
as
far
as
the
conformance
sub
project,
when
we
identify
bugs
inside
you
know,
that's
where
we're
as
testing
should.
Are
we
responsible
for
also
fixing
those
bugs,
and
if
anybody
wants
to
step
up
to
help
with
fix
the
api
side,
that
would
help
us
focus
on
writing
tests,
for
it.
A
For it, but yeah, it...

C
It goes back to the SIG, for sure. It's definitely stretching our learning. So just understand that the reason that's not prioritized is it's going to take a deeper dive for us than it might for someone who's coming from that expertise, yeah.
A
Yeah
I
mean
trial,
the
bug
assign
it
to
the
sake,
poke
somebody
if
you
know
them.
B
A
C
How does that look to you? We definitely have action items to create those issues. I think, primarily for the roll-up, this was to signal that the points we win this release are hard fought for — and if we don't quite make that, I just want to give a bit of a warning that we're doing all we can.
C
Some
really
cool
news
is
our
technical
debt.
That
was
the
next
key
result.
Our
technical
debt
is
getting
cleared
all
the
way
back
to
111..
So
if
you
look
at
our
chart
for
the
older
releases,
we
have
some
114
debt
and
I
think
some
a
little
bit
of
111.
So
there's
there's
five
points
here
and
there's
two
points
here.
C
It
will
be
lovely
to
get
closer
and
closer.
We
only
have
that
1.5
in
earlier
debt,
and
we
were
just
supposed
to
remove
114
and
I
think
we're
going
to
be
able
to
move
all
the
way
back
to
to
110..
So
that's
exciting,
because
there's
also
another
test
we
in
reaching
out
to
the
sigs
to
take
care
of
their
own
endpoints.
C
We
were
able
to
get
a
priority
life
cycle
test.
This
was
one
we
didn't
have
to
write
hong
hue
and
the
and
this
sig
were
able
to
write
that
test
and
we
will
be
promoting
it
in
six
days,
so
yay
team
kubernetes
it
takes
a
village
to
raise
this
stuff.
A
I
I
don't
know
if
we
see
tim's
comment
in
the
chat
proxy
redirects
may
be
deprecated
and
planned
for
removal
in
120..
I'm
not.
A
Not
sure
you're
talking
about
the
same
thing
here,
but
is
this
the
the?
Can
you
explain
exactly
what
what
the
proxy
metrics
are.
We
have.
We
have
tash.
C
Sure
sure,
let
me
bring
up
the
test,
so
we
have
two
two
original
tests
here
and
then
here's
the
the
the
bug
so
I'll
now
drop
this
link
and
the
links
are
in
the
in
the
in
here,
but
I'll
drop
them
directly
to
oops.
That's,
not
I'm
not
able
to
drop
links
super
easily
from
here.
C
Node proxy and pod proxy are hitting endpoints that have HEAD and OPTIONS — do you see this, HEAD and OPTIONS? — and there are some issues around the API server just ignoring or squashing HEAD and OPTIONS in the API service. So we cannot actually reach those endpoints, even though they're exposed as stable core. And if we're spending a lot of time writing tests for proxy redirects that are deprecated and marked for removal, I really need to prioritize that, because that's a lot of our effort over this.
F
Where
sorry,
where
is
the
redirect
portion
of
this?
Because
I
don't
think
that
node
proxy
there's
no
changes
being
done
there?
It's
your
pod,
exact,
attach
and
streaming,
which
used
to
work
via
redirect
that
redirect
capabilities
being
removed
in
120.
A
So, we had a discussion of — right now, with the API server: when you make this proxy request with no path, then we issue a redirect, whereas when there's a path, we can plumb it through to whatever you're proxying to. I think that's — if I recall correctly — and...
A
We added tests for that: we get those redirects for GET and HEAD — I think that's what the link is talking about — but we don't want to do redirects for POST or PUT or DELETE or whatever, right? And so we're trying to write the tests such that they check what the behavior we actually want is.
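The verb-dependent behavior being described — redirect GET/HEAD proxy requests, but never mutating verbs — can be demonstrated with a tiny stand-in HTTP server. This is a sketch of the general HTTP pattern only, not the API server's actual proxy code:

```python
import http.server
import threading
import urllib.error
import urllib.request


class ProxyStub(http.server.BaseHTTPRequestHandler):
    """Stand-in server: 302-redirect GET/HEAD, refuse to redirect POST."""

    def _redirect_or_serve(self):
        if self.path == "/target":
            self.send_response(200)
            self.send_header("Content-Length", "2")
            self.end_headers()
            if self.command != "HEAD":
                self.wfile.write(b"ok")
        else:
            self.send_response(302)
            self.send_header("Location", "/target")
            self.send_header("Content-Length", "0")
            self.end_headers()

    def do_GET(self):
        self._redirect_or_serve()

    def do_HEAD(self):
        self._redirect_or_serve()

    def do_POST(self):
        # Mutating verbs are never redirected; reject instead.
        self.send_response(405)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # keep demo output quiet
        pass


def run_demo():
    server = http.server.HTTPServer(("127.0.0.1", 0), ProxyStub)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{port}/"
    # GET: urllib follows the 302 to /target automatically.
    get_body = urllib.request.urlopen(base).read()
    # POST: the server refuses, so the client sees an error status.
    try:
        urllib.request.urlopen(base, data=b"x")
        post_status = 200
    except urllib.error.HTTPError as e:
        post_status = e.code
    server.shutdown()
    return get_body, post_status
```

A conformance-style test would assert exactly this split: the read-only verbs land on the target, the mutating verb does not get silently redirected.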
C
Thanks for chiming in — it's super helpful to get direction from people who know their stuff inside the parts that we're testing.
C
We'll
go
back
to
the
presentation,
so
it's
it
may
look
less
like
fixing
bugs
and
more
only
testing
the
the
parts
that
are
that
we
want
to
test
and,
as
we
identify
those
pieces
that
will
be
either
deprecated
or
not
not,
we
want
to
test
we'll
have
to
remove
those
endpoints
from
the
target
for,
for
conformance
and
I'll
show
you
where
we
have
that
list
by
the
way
on
api
snoop
with
our
debt.
C
The
next
part
is
our
kk
release
blocking
job,
which
is
the
job
that
the
the
keps
probably
need
to
be.
Wherever
that
we'll
do
the
signal
for
it
has
been
catching
an
untested
new
endpoint,
we
had
a
lot
of
some
issues:
understanding
for
the
prowess
cncf.
We
had
not
yet
set
up
decorators,
which
we
finished
configuring
some
of
our
test
rigs
this
week,
when
you
use
decorators
entry
point,
has
some
unexpected.
C
At
least
it
was
unexpected
for
us
behavior
and
that
it
overrides
entry
point
and
just
caused
the
first
argument
of
it.
So
we
needed
to
rewrite
our
job
to
to
because
it
takes
a
combination
of
running
as
root
and
passing
the
first
darkest
postgres.
In
order
for
our
job
not
to
spin
up
our
snoop
db
to
stay
long
long
running
rather
than
spitting
out
results,
saying
these
are
the
number
of
new
endpoints
that
are
that
are
that
don't
have
tests.
C
It
was
neat
to
see
some
folks
from
from
sig
networking
interested
in
having
some
some
ci
signal
around
this.
So
I
can
pop
that
over.
C
Here's our — I think we've fixed the Prow jobs since our conformance sub-project meeting, and we're waiting for some of the failing operations jobs — basically the validations for our job changes — to succeed. I think Testing Ops on-call should be fixing that sometime today, I suspect. Other important news: let's remember that code freeze is on the 12th and the release date is the 8th, so we're doing everything we can to get those points in, again.
C
If
we
can
get
that
seven
seven
point
test
in
at
the
top:
that's
ready
for
promotion
and
we
get
the
jobs
that
should
get
us
up
to
about
14.
So
anything
else
we
get
should
get
us
a
lot
closer
to
that
30.
C
If,
if
not
over
that
the
cncf
kate's
conformance
gate
is
running
and
in
preparing
for
our
kubecon
presentation,
one
of
the
things
I
do
in
the
beginning
is
go
through
the
process
of
running
sonoboy,
with
a
single
test
and
gathering
those
results
and
creating
a
pr
against
cake
and
then
and
that's
the
beginning,
here's
how
to
submit
and
then,
at
the
end
of
it
I
show
the
the
the
kate's
conformant,
the
cncf
kate's
conformance,
submission
gate.
A
Thank you — all right, thank you very much, Hippie. Right, on to our last item here, then: Tim Allclair.
F
Yeah, I think we had said in the previous meeting that we were gonna punt this topic to this week, and so I just copied that over. This was following up the discussion we had on the mailing list about the best ways of — or how we would like to deal with — label restrictions, or label ACLs, or tags, or the various other proposals that were...
A
All
right
well,
thank
you,
looks
like
tim
hawkin
has
some
comments
here.
D
Yeah,
I'm
not
sure
we
came
to
a
conclusion.
It
sort
of
fell
off
the
bottom
of
my
stack
for
a
little
bit.
D
We
should
move
towards
a
namespace,
selector
or
list
of
namespaces
that
doesn't
solve
the
general
problem
of
label,
echols
and
tags,
which
I'm
not
sure
the
same
thing
either,
and
I
didn't
track
where
the
rest
of
that
conversation
went
honestly
in
the
last
two
weeks.
A
To make some comments or try to further the discussion — otherwise we may, as we say, start things on the mailing list, which is good. But I almost wonder about a short doc or something that sort of lays out some of the options, where people can comment, rather than everybody having to go back to the thread. Maybe we already have that.
A
Okay, excellent — so probably, like you put it on the agenda, which is great. And given how unprepared some of us are, we probably should go back and read that, and then again put this on the agenda for two weeks from now, to have a live discussion if there are things that aren't resolved from the doc. But it's good — everybody who is on the call:
A
Please — Tim's put the link there, "Access Control on Labels" — let's all make an effort to review that and be prepared to actually have a substantive discussion next time. Does that make sense? Works for me? Okay! Oh, sorry about that.
A
Okay, nobody's got anything to say, so: get your KubeCon talks recorded today — they're due today, in a few hours — so good luck, everybody. I believe they're due at 5 PM Eastern time, if you're not aware, which is not that far away. So thanks a lot, and have a great week — a couple of weeks. Bye.