From YouTube: 20220922 SIG Architecture Community Meeting
A
All right, hello everybody, welcome to the Kubernetes SIG Architecture community meeting for Thursday, September 22nd, 2022. Let's get started. Hey, Han, I see you have this on the upcoming meeting agenda items, but did you want to talk about that today?
Yeah, I did.
A
Move that down, then. We will start with... you know, I'm putting it to the end. If it just fits in the day, it's fine, unless you know it's going to take like an hour. Okay, awesome. Thank you! Everybody, let's start with David.
C
Excellent. So every year, for the past three years, we have sent out a survey asking people who use Kubernetes how they make use of it and what problems they've had, and we've been refining it as we go along. This year we had fewer respondents than in previous years: fewer than 100 this year versus more than 150 last year, so that is about a third fewer, but still enough to get some results. The overall breakdown of the users is still very consistent.
C
If that effort is not valuable, then we need to find a different way to accomplish our goals, and this question had 75 percent of people saying yes, which I took to be very, very good. We had a fair chunk, about 20 percent, saying that they weren't sure, which is how I took a non-answer, and it was about the same as last year. And we only had three percent who said no, Kube is not more reliable.
C
So I was very pleased by that result. The other interesting thing that we found: we always ask about what versions are represented, and I'm actually going to skip ahead one. We updated our visualization to indicate how many releases back a particular cluster is: the oldest version you support, compared to the most recently released.
C
The most recently released this year was 1.24, and last year it was 1.21, and what's noteworthy here is that the oldest version that cluster admins have to manage is at 1.21 for a very high percentage of clusters compared to previous years. n minus 3 was only at 25 percent last year, and n minus 2 has shifted forward significantly.
C
The most obvious reason for this is likely to be beta API removal, and it was a thing we were concerned about happening. It looks like it has happened, but it's also why we stopped serving beta APIs by default. So what we're hoping is that, moving forward after 1.25, there will not be a cliff where people have built up a large amount of integrations on beta features that had taken a very, very long time to progress to GA and had been enabled in their clusters by default the entire time.
C
So we're looking to see improvement there. There's a lot more detail in these charts; much of it is pretty consistent with what we have seen in the past. I just want to hit one more thing, to try to keep this fairly brief, and it is the rate of alpha API enablement. This has been a concern, a minor concern, but a concern, for a significant period of time. This was a concern in the very first survey we gave; the response rates were about the same.
C
In the second survey we gave, which is 2021 on the right. And here we are seeing a significant reduction in all categories for administrators that have turned on alpha features in production, and I think this is also good. I think it shows that our platform has developed to the point where you no longer need to enable alpha features in production in order to get the behavior that you need, and that we have successfully graduated those at least to beta.
A
So, David, could you... I believe we can share the links to at least read-only versions of these reports? Maybe put them in the agenda if people want to browse them on their own.
A
And then the other point I want to make is that I agree that likely a big chunk of that is related to the 1.22 removals. But I also believe that, because this goes out... I mean, it's over a period of time, but the survey is done during a Q2-ish time frame, and I believe 1.24...
A
It had very recently come out when we did the survey, or even potentially came out while the survey was open, so there might also just be a little bit of lag. But I think that if you were to go back, the n minus 2 is also really low. So I think it still is also people stalling at 1.21; but just to put a caveat on that statement.
A
Okay, I believe Rihanna's next.
D
Thanks, John. Sorry for no camera, I've got some internet issues. Three tests today. The resource quota one is the same one that we brought last week; Eric reviewed it and gave us an LGTM with approve rights to get that in, so thank you very much for that.
D
Then we brought two new ones this week that we had not brought last week: read replication controller and replace replication controller. We need some eyes on those for a review; we'd appreciate it. And then the last one is the limit range test, which is being reviewed by scheduling. I think, at their meeting the previous hour, they were fairly happy with the second pass.
D
We expect that one to go through shortly; someone from SIG Scheduling is working on that. So one approve needed and one review needed would be great. Then, rolling over: we started looking at the finalize endpoint, where there's one replace.
D
We would like to hear, amidst the knowledgeable people in this meeting, any thoughts on it. Should it be tested? Is there any reason why not? It's a bit of an odd endpoint that we think might not be visible to the user agent in the way it works, but we just started scratching at it.
A
So is that just the finalizer updates for... I don't actually know; I'm not intimately familiar with the endpoint naming. Is that setting or replacing the finalizers in a namespace, or what does it actually do? Because if it is, then probably, yeah, we need to test that. If it's modifying the finalizers, that's something we probably need to test.
A
No, you're on mute. You are... you just briefly unmuted. Yeah, so we don't know, Ryan, I guess is the answer. Maybe David knows, but he's silent, so my guess is he doesn't.
D
Yes, thank you very much. Then just one last point; it's not on the agenda, but it is lower down. On the 20th of August we discussed the four endpoints, or... I have created an issue and dropped it in, and David, I also thanked you for that, just for some review of the eligibility of all the inputs, as it was not a certainty.
E
Tim? Hey everybody. So this is one of those code areas that falls under no other SIG, so SIG Arch gets it by default, and I thought I would show up here just to give a brief update on it. Some of you may know that I've spent some of my free time on this over the last six months.
E
What Go has built is a feature called workspaces, which is a multi-module concept. It's really designed for what we're doing with Kubernetes and staging, and it lets a lot of the tooling just work transparently across modules. So you'll no longer get those weirdo error messages that say, you know, "package main is not in this module" or something like that.
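The workspaces feature described here (available since Go 1.18) is driven by a go.work file at the repository root. As a rough sketch only, not the actual Kubernetes configuration, a workspace covering a main module plus a couple of staging modules might look like this:

```
go 1.19

use (
	.
	./staging/src/k8s.io/api
	./staging/src/k8s.io/apimachinery
)
```

With a file like this in place, go build and go test resolve packages across all the listed modules, which is what makes the cross-module error messages go away.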
E
It was originally added in the hope that, if we generated them on the fly, we wouldn't have to check them in. But the main problems with checking them in have been resolved; the conflicts that used to exist are gone, and we do check them in, and I don't really see us changing that anytime in the future. Which leaves me questioning whether the Makefile silliness is actually worth its weight.
E
I've personally come to the conclusion that it's probably not, and I reached out a little bit to Jordan and Clayton and Daniel and some other folks, and they all sort of agreed that the bus factor on that area is too high. It's too complicated, it's too brittle, it only half works anyway; there's still a bunch of code generation that doesn't run through that mechanism; and it's slow. It impacts every build, every time you use make for anything.
E
It has to spend 15 seconds figuring out that there's nothing for it to do. And so I'm proposing to abandon ship on that, and I've been working on a pull request to basically convert the five or six code generators that we run automatically into one of the regular old hack scripts, and we would just run it when you need to run it. There is an outside chance that people will make an API change and then run tests without running code generation, and then it will all pass when it shouldn't pass, and they'll push it up to CI. But CI is going to run make clean on it anyway, so I will try to put in place mechanisms to make sure that that doesn't actually make it into the tree.
E
So I guess I'm here to ask: does anybody object? The script to replace the Makefile is significantly shorter than the Makefile itself and much more obvious to read and to extend. I haven't offered a pull request just yet, but I will, hopefully within a couple of weeks. Anybody want to stop me before I do?
C
I'm trying to remember if you're talking about the bit that does the deep copy. Yep. Yeah, yeah, I'd be very happy to see that die.
A
Thank you, Tim. All right.
B
Tom? Yeah, thanks. So I wanted to bring this up because I have a KEP out specifically to add component health SLIs to each Kubernetes component. In short, I'm not going to make everyone read the KEP; it's a very simple idea. We have healthz, livez, and readyz endpoints on all of our Kubernetes components.
B
Right now, for people to make SLIs from that, you basically have some outside process which hits that endpoint and then converts it into a metric with a one or zero value, and then that gets used in an SLO. So everybody basically has to do this. There's a simpler way to do this, which is to emit a metric which corresponds to the health value of the health endpoint, and we can expose this on each of the components.
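The status quo described here, an outside process polling a health endpoint and turning it into a 0/1 metric, can be sketched roughly as follows. The function and metric names are illustrative only, not anything shipped by Kubernetes:

```python
import urllib.request

def probe_healthz(url, timeout=5.0):
    """Return 1 if the healthz-style endpoint answers 200 OK, else 0."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 1 if resp.status == 200 else 0
    except OSError:
        return 0

def render_gauge(component, healthy):
    """Render the 0/1 value as a Prometheus-style gauge sample."""
    return 'component_healthz{component="%s"} %d' % (component, healthy)
```

An external scraper would call probe_healthz against each component and expose the render_gauge output for an SLO pipeline; the point of the proposal is that the component could emit such a sample itself instead.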
B
The reason why I want a separate endpoint, and not to reuse the existing metrics endpoint on each of the components, is because for SLI data you generally want greater granularity, which means shorter scrape intervals, and as a result this disincentivizes you from wanting to use the existing metrics endpoint. And if you have stuff like the API server, where you have, like, a zillion metrics with tons of cardinality...
B
You basically can't scrape that every five seconds, because the cost would be extremely prohibitive, and that is basically why I'm advocating for adding a new endpoint for component health SLIs. I just wanted to make sure that people were okay with this, since this is basically adding a new endpoint to all of the components.
C
It might have been something that we built out in a branch or something like that, but I thought that we could hit the metrics endpoint and say: give us only these six metrics.
B
We have... I mean, you can grab... we also have a disallow list, but that has to be passed in as a flag to the component to limit the number of metrics.
A
I mean, the reason being, I wonder about things like, you know... I know that people protect those metrics endpoints, and they have RBAC, they have, like, proxies for RBAC associated with them. And if we have a new endpoint, is this going to trigger a bunch of new config that people have to do in order to protect that endpoint? Or will it not be protected? Like, what would be the... maybe you've got this in your KEP, but what would be that? Yeah.
B
With the different environments... so the RBAC rules are usually path-based, so if you have access to metrics, you should have access to metrics health. And second, the problem with actually including it: let's say you included this in the raw metrics endpoint, and we somehow supported a query parameter string that allowed you to see some set of metrics.
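For context on the path-based rules mentioned above: Kubernetes RBAC can grant access to non-resource paths via nonResourceURLs. A minimal sketch, where the second path is only a hypothetical name for the proposed SLI endpoint, not a decided one:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - nonResourceURLs:
      - "/metrics"
      - "/metrics/slis"   # hypothetical path for the proposed health-SLI endpoint
    verbs: ["get"]
```

Because the rules match on path, a new endpoint under an already-granted prefix needs little or no new configuration, while a new top-level path would.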
B
Basically, what you would have to do is have a scraper which scraped that same endpoint, but with a query string, in both versions. Like, you would need metrics A through...
C
I could certainly see having a positive and a negative list. I guess I'm looking at this, and the problem that you raised, of wanting to base some sort of SLI on a subset of my metrics, seems like a general "I have these metrics and some of them I want to scrape more often", and you're selecting your favorite and putting it into a separate endpoint.
C
So we've got health and we've got ready now. So would I add percent failure on 500-series error messages? Because, you know, the 400s I don't really care about, but the 500s, those are probably me. Would I add that?
C
But let's make sure... let's make it smaller. Let's make one that I want to have an SLI on, or... I want to have a queue depth check on my controller manager.
C
Yeah, and so, like, I understand: making a new endpoint is one way to do that. But if it would be more effective to say everyone might have different SLIs, and we have what we think they might be... if we have a way to allowlist the set of things we want and hit the metrics endpoint, and then turn that around and blacklist on the other side, do we actually get a better result?
C
Right, and so you could have allowlist= and then you could have denylist=. Maybe "allow" and "deny" don't work there, but if you have both the positive and the negative filtering, then you only ever specify twelve: these are my SLIs, and these are my not-SLIs.
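The combined positive and negative filtering being suggested could work roughly like this; the parameter names, and the choice to apply the allowlist before the denylist, are assumptions for illustration, not an agreed design:

```python
def filter_metrics(samples, allow=None, deny=None):
    """Filter scraped samples by an optional allowlist, then an optional denylist.

    samples: dict mapping metric name -> value.
    allow:   if given, keep only these names (the "these are my SLIs" set).
    deny:    if given, drop these names (the "these are my not-SLIs" set).
    """
    out = dict(samples)
    if allow is not None:
        out = {name: v for name, v in out.items() if name in allow}
    if deny is not None:
        out = {name: v for name, v in out.items() if name not in deny}
    return out
```

A scraper wanting a small, frequent SLI scrape would pass a dozen names in allow; the bulk, infrequent scrape would pass the same names in deny, so nothing is collected twice.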
A
Why don't you think it through and propose some alternatives in the KEP? I mean, we can talk about it here, we have time, but the basic idea is: don't limit yourself to this set of metrics that people might want SLOs for; people may want SLIs for other metrics. What would be your solutions to that? So a second stream would include/exclude as one; a separate endpoint is another; different categories that they could configure is another. There are lots of different ways to solve the problem.
A
I mean, that might not be the right solution then, but my point is... I'm not sure why, first of all... but I think David's point is: you're proposing a solution to a one-off problem that may be a class of problems. So can we propose a solution to the class of problems? That's the question.
B
I would be okay with renaming the metrics endpoint to something more generic that would allow for, like, quicker scrape intervals, but partitioning the metrics makes more sense to me than having a weird configuration option. Because people don't have to scrape these metrics, right? They don't exist today, and if you were to get them in a normal metrics endpoint and use them, there's the potential that someone could use them incorrectly. So, like, currently people set their default configuration for Prometheus to this one endpoint.
C
I think it is worth exploring in the KEP. My point was what you were saying, John: this looks like identifying a subclass. "I want to scrape these more often" seems valuable, but I don't know that we can assume that, as Kubernetes developers, we know what that perfect set is going to be. And so allowing it to serve an intent of "I want to scrape these for SLIs, so I want to scrape them more often, I want to choose a small set", and allowing an end user to somehow configure that...
C
Another way to do that would be, you know, a command-line flag that says: put these into the metrics SLI endpoint. I'm sure you can think of other ones as well, but solving that general problem, "I want to collect these more often", seems more compelling to me than...
C
I am... I'm interested in how it gets configured. I do think that we want to have distributors or deployers have some level of control over what gets exposed there.
C
There are things we would certainly suggest, but I think they would know what they actually want to watch, and then potentially place objectives on, and we may not know that in advance.
A
So, can you put a link to the KEP in the agenda, so anybody coming along and watching the recording, or those of us here, can find it easily? And then... I mean, you know, I think let's see what that looks like in the KEP once we update it based on this conversation. Are there any other questions you have?
A
If there's no other discussion on that, or any other topics, let's stop early.