From YouTube: 20220113 SIG Arch Community Meeting
A
All right, hello everybody. Today is January 13th, 2022. This is our first SIG Architecture meeting for Kubernetes for 2022, so welcome and happy new year, everyone. We actually have a pretty good agenda today, since the last few meetings were cancelled; I guess that's not too much of a surprise. We'll start with David, who is going to talk about our production readiness survey results. So I guess I need to give you permission to share. Yes, please.
B
Should be able to do it now. All right, you should be seeing a screen. It has two surveys on it, one on the left and one on the right. Is that what you're seeing? Okay, great. Yes, so I'm going to cover this briefly; there was a deep dive on it, so if you want to look at the PRR meeting recording, we go into it in detail there. I'm only going to cover a few highlights here.
B
B
D
B
There are a lot of pages here that go into how our versions are split and whether people are using old versions or new versions. You can dig into that later if you want, but one thing that stood out is the reason why people roll back their versions.
B
While not perfect, it is good-ish: it means that we don't have entire components failing. We do have more space being filled in by things like a particular feature failing.
B
Another thing that stood out is the usage of our APIs, and I'm going to go to two different pages here. This page shows us the number of people who allow beta, not enable it: they aren't making a choice, but they allow beta in production. We can see that over 90% of clusters allow usage of beta APIs in production, and this is a problem for a couple of reasons.
B
One of the reasons is that beta APIs aren't finished, so they'll often evolve as time goes on. Another issue is that it allows these sorts of dependencies to develop where someone relies on a beta feature: either it changes on its path to GA, or when you promote from beta to GA you're guaranteed that your manifests need to change. So if you're making use of this, it's something that's going to force you to update and change something later on.
B
No matter what you do. The freeform text, which I don't have displayed here, has a lot of details in it. Many of the reasons that people listed were along the lines of "it's just hard to turn off" or "it's a pain for me to turn off", and so they don't do it. And then one final thing that I'd like to cover is our alpha enablement.
B
This still shocks me, but it also clearly indicates that, sorry, 30% of clusters turn on some alpha feature in production. What that indicates to me is, one, that they have a lot of confidence in our alphas, but also that if they need a feature, cluster admins are able to identify the feature they need, figure out how to turn it on, and successfully turn it on wherever they need to, right?
B
So that does not appear to be a significant barrier, but unless far more people want to turn that on, it seems unlikely. I was actually shocked the number was as high as 30 percent.
B
So those are the highlights I picked out to share here. I will say that the beta findings, about so many clusters having beta turned on, and knowing the impact that could have, led me to open a KEP.
B
And was the question in the survey only about APIs, or also about features? It was also about features. The KEP is constrained to APIs, so it's possible right now to turn on a feature gate, or a beta feature will be on, and that may not cause any churn in the cluster, right? If you turn on a feature gate, there's no guarantee that you're going to have to change something, but if you're using an API, there's a guarantee that you're going to have to change your manifest on disk in order to continue leveraging it.
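For context on the beta API discussion above, here is a minimal client-go sketch (not something shown in the meeting) of how a cluster admin might list which beta and alpha API group/versions their cluster currently serves. It assumes a standard kubeconfig is available and keeps error handling short.

```go
// List the beta and alpha API group/versions served by the cluster
// the current kubeconfig points at, using client-go discovery.
package main

import (
	"fmt"
	"strings"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the usual kubeconfig (KUBECONFIG or ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}

	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if strings.Contains(v.Version, "beta") || strings.Contains(v.Version, "alpha") {
				fmt.Println(v.GroupVersion) // e.g. "flowcontrol.apiserver.k8s.io/v1beta1"
			}
		}
	}
}
```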
A
Yes, you've stopped sharing. Thank you, David. If nobody has any other questions for David, let's check out the next item on the agenda. And by the way, we're trying to do those surveys annually.
A
So yes, we're presenting it now, but the data was actually collected back in Q2 of 2021, so we took a little while to process the data. Probably in another quarter or so we'll end up sending that out again, and if we can get the data through faster, maybe we'll see if there's been any changes in the last year. All right, on to the second item.
A
F
I think I don't need to share unless someone requests it later, but I think I can start without sharing.
A
You can click... well yeah, I'm sharing that tab, so I can't. Anyway.
F
Yeah, so I wanted to bring the proposal that we created as part of the Reliability Working Group for discussion.
F
Basically, it's a little bit connected with the production readiness survey that David was presenting. We are seeing that people are still facing a bunch of reliability-related issues in their production clusters; I think we all were debugging a bunch of those for our customers and so on. So we were trying to come up with some proposal for how we can improve the situation here.
F
I
think
the
first
proposal,
the
first
iteration
of
that
is
like
roughly
a
year
old
and
I
think,
based
on
the
feedback
that
we
got
back
then
like
it,
seemed
to
be
way
too
strict
and
way
too
too
restrictive.
So
we
were
trying
to
come
up
with
something
hopefully
more
acceptable.
I
would
say
so
so
the
link
is
here
in
the
agenda.
I
I
was,
I
was
sharing
it
like
earlier
last
week.
F
I guess with this group, and I think also in December, or somewhere towards the end of last year, for the first time. Basically, the proposal is to ensure that we will be able to somehow...
F
...somehow, in a more policy-based way, encourage people to work on, or ensure that people will not be completely ignoring, the reliability-related issues.
F
No, keep going? Sorry, yeah. So the proposal basically consists of four phases, which include reducing test flakiness, increasing test coverage, addressing some long-standing reliability issues, and investing in some new reliability-oriented efforts.
F
What we are saying is that, as part of our Reliability Working Group, we will come up with a more specific list of items, especially in the last two categories.
F
We will present those to SIG Architecture, and we will be able to say that each SIG should be working on them within some defined time frame, whether that will be two releases, three releases, or something like that, but probably also depending on how big the effort would be to actually address it.
F
We don't want to do anything like a reliability-focused release or anything like that, because that probably doesn't work that well. I think we also need something more sustainable, and we need to ensure that we will be addressing this constantly, over a long period of time, and that we will keep focusing on it.
F
I,
I
probably
don't
want
to
go
over
the
whole
policy
and
details,
but
I
I
wanted
to-
I
guess:
open
it
for
discussion
now
or
or
your
potential
concerns
here
and
to
hear
if
it's
something
that
you
believe
is
acceptable
or
or
it's
you
have
some
significant
concerns
here.
A
I had a question. In the first proposal there was a mechanism, much like scalability has, to sort of reject or revert certain changes. Is that something you have in the latest proposal? Because I think that was a little controversial.
F
Well, we relaxed it. I think we don't want to do anything at the individual PR level. What we are saying is that the enforcement mechanism will be purely at the KEP level. So if a given SIG will not be working on, or will be consistently ignoring, all the reliability-related requests, then at some point, in the next release...
F
...we will reject all the graduations for the features that they are working on. So they will still be able to commit any code to Kubernetes or anything like that; they just will not be able to graduate any... or sorry, they will not be able to introduce new features. They will be able to graduate the existing ones, because that at least improves reliability, so beta-to-GA graduation will be possible, but they will...
F
...they won't be able to introduce new alpha features, nor graduate alpha features to beta. So it's still a stick more than a carrot, I would say, but it's hopefully much less aggressive.
A
Well, I mean, I'll say, I guess I think it's reasonable to put some constraints. I would hope that our SIGs are responsible enough to not get to that state anyway, but it's reasonable to have some looming stick, I guess, given that we need to protect the reliability of the overall project. That stick just can't be abused.
B
Is it the sort of thing where we would look and say: yes, we think this is a good idea, but before we make it enforcing we would like to see what it would actually clamp down on first, to see if it has any false positives, or things we perceive as false positives?
A
I mean, to me, nothing we do here is... we're not talking about mechanical, unchangeable things; it's all a bunch of people agreeing to something. So you can always argue that we disagree, that this was a bad idea, whether it's quote-unquote enforceable or not. It's all just up to us anyway.
F
Sure, that makes perfect sense. Also, the enforcement really will be on us; it's not that there will be any automation. It will be us, and I'm assuming in the end it will be a SIG Architecture approval, a yes or no, when we want to block some enhancement or some graduation. But yeah, that makes perfect sense, what you're saying, Jordan.
F
So I guess my question is: what is the next step? Should I transform it more into a KEP format or something like that, and then open it for more formal approval from SIG Architecture?
A
F
F
A
E
Yeah, I started a thread on the mailing list earlier this week, talking specifically about what our deprecation policy says about treatment of stable API versions, and just raising the point that, the way the policy is written, it doesn't actually make them seem that stable. In practice we've never removed a stable API version, and I have a hard time imagining us removing an entire stable API version, but what the policy actually says is, if there were a v2, and HPA just hit v2...
E
...so, according to the policy, we could deprecate v1 and remove it in a year. Would we? No, and I can't imagine us removing it, but the policy says we can, and that does a few things. It minimizes the difference between beta and stable APIs in the deprecation policy and makes it look like, oh yeah, stable APIs...
E
...aren't that much more stable than beta ones, which does not match reality. But it also sort of sets the expectation for developers on Kubernetes that, give stuff a year, and we can actually remove something pretty significant, and I don't think that's true anymore.
E
I think the pain that we've seen the ecosystem go through when beta stuff was removed is a tiny, tiny fraction of the pain that we would feel if something stable was removed, and so I had proposed basically removing that clause and having the default for stable API versions be that they're permanent. I hoisted a few of the comments from the thread into the agenda. Several of them centered around what we do with edge cases for this, where there's a particular field or a particular value that we discover is problematic, so the PVC reclaim strategy was given as an example, or a particular element in the unversioned portion of an API.
E
So, like the selfLink example that Wojtek pointed out. I think the networking folks brought up issues with behaviors: even if the API surface remained the same, if the behavior of a controller backing it was problematic, like the scale issues we saw with Endpoints, what are we saying our guarantees are? And then David had just today brought up an example of a particular v1 API that really doesn't have a future in terms of a v2 API or continued development, like ComponentStatus. So what do we do with that?
E
And so I didn't know if we wanted to have a high-bandwidth discussion of any of these things here or continue it in the thread. But I did want to point out that the deprecation policy already has sort of an escape hatch clause at the end, where it says...
E
...if there's something that's really hindering progress in the system and it's harmful, this policy is a living document, and we have the ability to work with SIGs and users and figure out ways to deal with things. But what I want to see is a shift in the default. The default currently says: deprecate a stable thing, and you can remove it in a year. I think the default should be that once it's stable, people should not expect it to be removed.
A
So, you know, to decide that they no longer... which we already could do with an API, right? We could just decide to no longer support an API or something. And should there be a way to make the default become off at some point, as opposed to defaulting to on?
B
David, can you just say your comment? Sure. If we turned off the endpoints controller, is our cluster still compliant, and would that be a problem for deciding to do that? Because...
D
B
I see a lot of utility in these stable APIs, as these are the stable APIs and basically you can expect them to work everywhere. And so these edge cases like ComponentStatus, they don't work everywhere. So if we removed...
A
Well, we can already configure a cluster, any cluster, to not be conformant, right? So you're right that if we switched the default to off, then it would fall out of... the conformance program says that a given vendor is able to create a cluster that supports all of these behaviors. It doesn't say that every individual cluster created does; it's a little weaker than that, I guess. So that doesn't concern me too much; you could still turn it on. Now...
A
There
is
you're,
more
metapoint
right,
maybe
being
not
be
so
pedantic
about
it.
Is
that
hey
we've
said:
we've
got
this
set
of
apis
that
works
across
all
of
the
different
vendors,
and
so
you
know
how
do
we
keep
in
sync
that
set
of
apis
and
behaviors
with
with
what
vendors
are
doing
a
sort
of
de
facto
or
in
practice?
E
But I think the key, and this came up in a few other places too, is that this is an option for the person running the cluster. If you're running the cluster and you're hitting scale problems, there's a way for you to improve your life.
E
B
Sorry, it's stable, yeah. APIs that aren't consistently available and don't consistently work seem like a shame to have in stable, right? Would this policy be used to do that, where the basis for "we aren't going to remove this" is just "I don't want to have to deal with the few clusters where it works"? Like, will the policy be used to defend that, versus, I think, going forward we wouldn't...
E
G
So, the conformance group has... don't bring conformance into this yet, because conformance is unnecessarily limiting the amount of valid stable APIs that we should be able to support, because we don't have a mental model that allows us to say... until we have a way to have conformant optional APIs. But the core point you were making, Jordan:
G
I
agree
with
an
api
we
exposed
to
end
users
at
stable
would
have
a
higher
bar
than
it
did
earlier
in
the
project
because
of
arguments
like
this,
but
I
don't
think
it's
fair
to
look
at
what
we
would
do
going
forward
and
say
well,
because
we
wouldn't
do
this
going
forward.
We
would
break
existing
users.
I
don't.
I
view
that
as
a
logical,
two
separate
discussions
that
aren't
coupled.
E
So we've largely focused on whether we would remove an API, like a type or an API version. I think it's safe to say that within an API version we're not ever going to remove a type; we're not going to take ComponentStatus and say it no longer exists and any API request for v1 ComponentStatus will fail. It might exist and do nothing, but I don't think we're talking about removing it, right?
E
Is that right, Jordan? Right, yeah, okay. And so the further question, maybe, is: within an API version, are we willing to leave ourselves room to nullify behaviors? Like the PVC reclaim stuff: the field still exists, you can still send me YAML that has it, but I'm not going to respect it anymore.
E
G
Did we make the example, though... I mean, is EndpointSlice a reasonable example of a reasonable position, where, in order to avoid breaking behavior, you preserve the behavior in a stable API unless the user opts in to being broken, the opting in to being broken is the justification for that opt-in, and what supports the other side of it being disabled is that there's a significant benefit to most net users? And then maybe the trade-off would be, what if PVC reclaim were in conformance in a way that was reasonable... like, there would occasionally be features that go to stable...
G
...and they don't get a lot of use. You could make an argument that just because something isn't widely used is not an argument that we should remove it from stable. But then there is a valid thing, which is: if we could know with certainty it was not being used, it might be worth it. But we will never know with certainty who's impacted, and the cost of removing something fundamental to someone's workload is very high, and we have other existence proofs of...
E
...that, right. Like externalIPs and stuff like that, where we've added options to block the usage of those features because they're just bad, but we're not going to get rid of them, because we simply cannot know that they're not in use. That was a good example: if you're talking about policy, about who can write what to the API, you can already set up an admission webhook that will block certain values or certain things.
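As a rough illustration of the admission webhook approach mentioned above (a sketch, not the actual policy anyone described), the handler below is the core of a validating webhook that rejects Services which set spec.externalIPs. TLS setup, webhook registration, and most error handling are omitted, and the endpoint path is arbitrary.

```go
// Core handler of a validating admission webhook that rejects
// Services using spec.externalIPs. Sketch only: no TLS, no registration.
package main

import (
	"encoding/json"
	"io"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func validate(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)

	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}

	allowed := true
	msg := ""

	// Decode the incoming object as a Service and check the field in question.
	var svc corev1.Service
	if err := json.Unmarshal(review.Request.Object.Raw, &svc); err == nil {
		if len(svc.Spec.ExternalIPs) > 0 {
			allowed = false
			msg = "spec.externalIPs is not permitted in this cluster"
		}
	}

	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: allowed,
		Result:  &metav1.Status{Message: msg},
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// A real webhook must serve TLS; plain HTTP here only keeps the sketch short.
	http.ListenAndServe(":8443", nil)
}
```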
E
The key is that the API itself is still there and is still possible to write to. So if the reasons for removing access to this thing don't apply to you, you can continue to plumb data through those fields. That really puts control in the cluster operator's or cluster owner's hands in terms of what your use case needs. The flip side of that is that there was a lot of complexity to get good behavior right; it's, how do we evolve towards secure by default, safe by default, not stupid by default?
E
G
On the proposal overall, is there anything we can learn from others in the ecosystem, Jordan? We didn't really talk about Linux syscalls or other patterns of API stability that might be relevant as exemplars of long-running, successfully evolving systems. Is that part of the KEP part of the proposal? There should be some... we could do some more effort, maybe through reliability or other efforts, to look at long-term maintenance of this, but we don't have to do it as part of this statement.
E
A
Okay, Tim, your hand's still up. Is Hippie here?
H
Hello. A quick update from the conformance subproject. We had some early wins for 1.24, with 12 proxy endpoints; thank you to everyone who was a part of that, and we're excited to get that win early in the year. We also have the HPA endpoints, which I mentioned earlier, that are currently ineligible. At some point we'll need to do a framework, as was mentioned earlier on the call, for things that are optional but where it would be nice to know if it's supported by your provider.
H
Hopefully we'll have a big party, maybe in person, or not, we'll see; it's in Detroit anyhow. That's our update from the conformance subproject. Any questions or thoughts, or things you'd like to see there?
H
There are two components that are important here. One is the gates for the informing jobs, so that SIG Release gets a signal when somebody starts to promote a stable API that doesn't have matching conformance tests, and I think that's the primary piece where APISnoop comes into play.
H
That's directly Kubernetes-community-impacting, versus the flip side, where we have some automation in play for when cloud providers submit their Sonobuoy test results and we say, you're missing a few of our important, and maybe sometimes optional, tests, or you can say, here's the number of services that you offer, as we progress that. So, to be clear, I think that second part is not as important from a Kubernetes community perspective, but it might be better overall if the gating and the jobs and things are owned by the Kubernetes community versus the CNCF.
A
So currently... I guess my question would be, one, who would own it? And two, there's one aspect which is the code, and another aspect which is the operation of APISnoop, the website, all of that stuff. I don't know; that seems like part of the CNCF portion of the conformance project, as opposed to the SIG Architecture part that owns conformance.
H
So I think the things that should be owned by the Kubernetes community are the jobs, and the jobs aren't a website, right? However, yeah, I think we pull directly from the repository for the updated list, with the data to drive it coming from a job. So basically the website is fed from a job that's run by the community, so that the definition of what's actually accurately tested versus, you know, where the API sits...
H
The status and health of the API is published as an output from the conformance test runs, which combines the state of the current API with all the conformance tests. When they're combined, what is that? I think it's a JSON output when we're done, so it's easily consumable, but that process needs to be curated by either SIG Architecture or SIG Testing, I think, or maybe SIG Release, I don't know.
H
I
think
I
think
the
the
expertise
probably
lies
more
closely
with,
as
far
as
the
infrastructure
part
on
the
jobs
kate's
in
for
a
working
group
or
sick
testing.
H
A
Yeah, yeah. Was there a link? Did you have details, like which SIG, or what was the outcome? I wasn't in that SIG K8s Infra discussion.
H
I don't remember any particular detail. I think it was just pushed out due to time, I guess.
A
A
H
How do we inform the CNCF of what the coverage is, right? And I think that's the thing: currently the coverage is owned by the CNCF, and if we separate that out, the question of the health of our testing versus our API, that whole piece, I think, needs to be separated out so that our community knows what's defined. And the handover is when we have what's defined as part of the website and the verification of submitted results, right?
A
E
I can actually start speaking to it; I'm looking at the topic. This stems from the component-base component config work that happened over the last few years and kind of petered out, but a few of the common configuration structs did get defined in the component-base repo.
E
I think around things like logging and debug options, and maybe client configuration, there are a few struct fragments that are intended to be embedded into config file types for particular components.
E
The issue is that all of the component-base config structs are in a v1alpha1 package, but they are embedded, so the versioning of those structs is not exposed to users in any way. The versioning depends entirely on the consuming component: kubelet config is at beta, scheduler config is at beta, API server config is still at alpha. But from the user's perspective, all they know is "my kubelet config is a beta config file".
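To illustrate the embedding problem being described, here is a minimal, self-contained sketch. The types are heavily simplified stand-ins, not the real k8s.io/component-base or kubelet definitions: the shared struct notionally lives in a v1alpha1 package, but once it is embedded, only the component's own apiVersion shows up in the serialized file.

```go
// Demonstrates that an embedded "shared" config struct carries no version
// of its own in the file a user actually writes or reads.
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Shared fragment, notionally from a v1alpha1 component-base package.
type LoggingConfiguration struct {
	Format    string `json:"format,omitempty"`
	Verbosity int32  `json:"verbosity,omitempty"`
}

// Component config, notionally served as kubelet.config.k8s.io/v1beta1.
type KubeletConfiguration struct {
	APIVersion string               `json:"apiVersion"`
	Kind       string               `json:"kind"`
	Logging    LoggingConfiguration `json:"logging,omitempty"`
}

func main() {
	cfg := KubeletConfiguration{
		APIVersion: "kubelet.config.k8s.io/v1beta1",
		Kind:       "KubeletConfiguration",
		Logging:    LoggingConfiguration{Format: "json", Verbosity: 2},
	}
	out, _ := yaml.Marshal(cfg)
	// The output only carries the kubelet's v1beta1 version; nothing tells
	// the user that the logging stanza came from an alpha package.
	fmt.Print(string(out))
}
```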
E
The
fact
that
the
logging
struct
under
that
comes
from
an
alpha
package
does
not
matter
to
them
at
all,
and
so
it's
kind
of
the
current
situation
is
just
kind
of
strange.
It's
not
really
clear
what
benefit
we're
getting
from
like
versioning,
these
common
component-based
config
structures,
so
we're
trying
to
figure
out
what
to
do
with
that.
Whether
to
have
like
alpha
and
beta
packages
and
then
say
like
consuming
components,
should
just
pull
from
a
level
that
matches
their
stability
level
or,
if
we
should
just
say
these
are
small
enough.
E
Config
structs,
there's
like
three
fields
in
each
one.
Let's
just
think
carefully
about
this
and
then
say
it's
like
treated
as
stable,
like
we're,
not
really
sure
what
to
do
so.
Patrick
can
edit
it
here
anything
else
you
want
to
add
patrick.
I
saw
you
went
back
on
mute.
I'm
sorry!
If
you
were
trying
to
talk.
E
Well, while we're waiting for Patrick: Tim or Clayton or David, or other people who are familiar with config APIs, any gut reactions to that issue?
E
Yeah, so maybe I agree with you for logging configuration, but I think Patrick's point is well taken that this is kind of a broader issue for any of these little shared, reusable config structs, so maybe set logging aside and we'll think about it. I'm also down on reusing structs overall; I started on a mini mission and then ran out of time, but reusing across the entire API surface just always ends up in pain.
B
Sure, let's say not logs. What if we did it for serving, right? Because we have certificates and ports and bind addresses, all those things. I'm thinking to myself, if I'm a cluster admin, they aren't all the same, and so I configure slightly different structs for the kubelet than I do for the kube-controller-manager, than I do for the scheduler, than I do for the kube-apiserver. I don't think I will thank future us.
E
Tim, the reason the common structure seems useful is that these are actually wiring up to the same component. Leader election is a good example; Wojtek just added that. There's a single leader election component, and the options for it should be uniform across all consumers of that component in the same Kubernetes release; it would be maddening if the scheduler options were not the same as the kube-controller-manager's.
E
A
It sounds like they're in a separate package somewhere that's evolving separately, because it was done by the component config group, as opposed to that group providing a library or a framework or practices for the code owners of leader election and whatever other pieces to create their own structs.
A
E
Well, I mean, maybe it could be, but whether the versioning is associated with the component or with this component-base thing, neither of those is going to be visible to the end user. The end user's perspective is: I've got a v1beta1 kubelet config, and then some sub-stanza of that is these options for leader election or whatever. And so versioning my leader election options inside my versioned kubelet config file is super weird.
A
E
So I think the question mostly is: how do we make this make sense to a user? That's the thing I care about the most, actually. If I'm using what I think is a stable config file, everything in that file should be stable. But then the second question is, say you've got a stable config file:
E
What
how
do
we
manage
options
to
these
components
that
are
experimental?
So
a
good
example
is
like
the
sanitization
log
standardization
like
that
was
an
alpha
thing
that
instrumentation
worked
on
for
a
while
and
turns
out
didn't
pan
out.
So
it's
going
to
be
removed
like
that
was
driven
by
a
field
under
logging
options.
E
D
No idea; I think the gain on my microphone was turned down by something, and I have no idea why. I just came out of a Zoom conference where everything was working fine, I joined the next one, and boom. Anyway, good. I missed some of the discussion because I was also restarting Zoom, but I agree, the current setup with this versioned structure is just weird. I think I understand the original purpose.
D
Saying this field is alpha, or experimental, or whatever we want to call it, in the documentation, and moving on, and just having one struct that is the same in all configs it gets embedded in: that's what the PR that I created implements. It basically undoes all this complicated design that doesn't bring us anything and just has one struct which is treated as an API. So we need to be careful when making changes to it because it does get embedded elsewhere, but I think it's a cleaner solution.
D
One thing that is not in the PR is feature gates. There has been a discussion about whether that is useful, for I think a command line or for something somewhere else, and I think the conclusion was that yeah, it would be nice to have something like a feature gate that says: I want to disable experimental features, I want to be sure that I'm using just stable things, also in my command line. So I could imagine that it would be useful to have, say, a feature gate like "experimental logging" or "alpha/beta logging".
D
We would need to agree on a name for those, and then, depending on what functionality is considered beta or alpha, there would be an error if someone tries to use it without that feature gate enabled. But that's perhaps too complicated; perhaps we don't even need that. It's just a thought that I had.
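As a rough sketch of the kind of check Patrick is describing (hypothetical, not the actual PR discussed above), the snippet below rejects an experimental config field unless a feature gate is enabled. The gate name AlphaLoggingOptions and the field names are made up for illustration.

```go
// Reject experimental config fields unless the matching feature gate is on.
package main

import (
	"errors"
	"fmt"
)

type LoggingConfiguration struct {
	Format   string // "text" or "json"; stable
	Sanitize bool   // experimental field in this sketch
}

// featureGates stands in for a component's parsed --feature-gates map.
type featureGates map[string]bool

func validateLogging(c LoggingConfiguration, gates featureGates) error {
	if c.Sanitize && !gates["AlphaLoggingOptions"] {
		return errors.New("logging.sanitize requires the AlphaLoggingOptions feature gate")
	}
	return nil
}

func main() {
	cfg := LoggingConfiguration{Format: "json", Sanitize: true}
	if err := validateLogging(cfg, featureGates{}); err != nil {
		fmt.Println("config rejected:", err)
	}
}
```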
A
Okay, thank you, Patrick. We're just about out of time, and we're not going to solve any problems here. It looks like in the agenda... it looks like you have a link to the PR?
A
So
please,
folks,
if
you
have
comments,
thoughts,
ideas,
criticisms,
go
ahead
and
go
there,
and
I
wanted
to
give.
We
have
one
more
agenda
item
james
from
the
release
team.
I
think,
wants
to
say
hello,
so
james,
why
don't
you
go
ahead?.
I
Yeah, can anyone hear me?
I
Cool, so yeah, hi. My name is James; I'm the release team lead for Kubernetes 1.24. I just wanted to come by and say hello, really, as I'm doing with every SIG. I was going to say that SIG Architecture generally does not submit KEPs, but maybe you will with 3136. So, at the risk of telling you things that you already know: the PRR soft freeze is on the 27th of January, and then enhancements freeze itself is either the third or fourth of February, depending on your time zone.
I
So there's a deadline to think of if you are submitting a KEP, or if there are other SIG Arch-sponsored KEPs. I just really wanted to ask if anyone had any questions about the release, or if I can help in any way with how things are going.
A
All right, well, thank you, James; looking forward to the next release. Okay, everybody, we're just down to a minute left. So unless anybody has any other quick questions, we'll let it go, and we'll see you all in two weeks.