From YouTube: 20230406 SIG Architecture Community Meeting
A
Hi everybody, today is April 6th. The release is not yet over, so attendance is sparse today, but we do have a few urgent items to take care of. Jordan, would you like to get us started, please?

C
Sure.
A
I did; can you try now?

C
Yeah, okay.
C
All right, can you see my screen?

A
Yes, I can. Make it a little bigger; that helps.
C
So I just opened this proposal earlier this week, and it's been spinning up discussions, but the summary is: it would be awesome if the oldest nodes that we support and the newest control planes that we support worked together.
C
That's the tl;dr on the proposal. The current skew that we support is n-2, and there's nothing magical about two versions: the reason we chose n-2 was so that, if we're supporting three minor versions, the oldest nodes would work with the newest control planes. Then, when we made the switch to the annual support period, we realized that people actually need a couple of months to qualify and roll out a new version, so we extended support for the last minor version by a couple more months to allow for that overlap.
C
We didn't really consider that our node skew didn't allow people to actually leave their node pools on the old version, get their control planes up to the latest version, and then do the node upgrades. So it was sort of a boundary off-by-one error.
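That off-by-one can be sketched in a few lines. This is a minimal sketch of the arithmetic being described; the constants are illustrative assumptions, not official policy values:

```python
# Sketch of the skew arithmetic: with some number of minor versions in
# support at once, the oldest supported kubelet sits (supported - 1)
# minors behind the newest control plane, and it only works if that
# distance fits inside the allowed node skew.
# Assumption: the numbers below are illustrative, not official policy.

def oldest_node_works(supported_minors: int, node_skew: int) -> bool:
    """True if the oldest supported kubelet can run against the
    newest supported control plane."""
    return (supported_minors - 1) <= node_skew

# Three supported minors with an n-2 kubelet skew: the oldest node works.
print(oldest_node_works(3, 2))  # True

# The extended support overlap means roughly four minors coexist,
# and an n-2 skew no longer reaches the oldest node.
print(oldest_node_works(4, 2))  # False

# Widening the kubelet skew to n-3 restores the property.
print(oldest_node_works(4, 3))  # True
```

The proposal in the discussion amounts to flipping that second case from False to True by widening the allowed node skew by one.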
C
So I go through a bunch of motivation in the doc; this picture is probably the most useful. Consider an example: we released 1.40, and then later we released 1.41, then 1.42.
C
Then, a year later, we released 1.43. How would someone who just wanted to stay on supported versions, with as little churn as possible, go about getting to the new version? To stay within what we currently say we support, they can start their control plane upgrades, but then they have to do node upgrades halfway through, which means draining and recreating all their pods and workloads and everything.
C
Then they get their control plane up to the last version and do all their nodes again. And so, if we actually supported the oldest nodes with the newest control planes, their upgrade could look like this instead. It also drops one of the control plane and node combinations to qualify.
C
Anyway, that's the motivation. The rest of the KEP talks through what it would cost us to do this, and how disruptive or problematic it would be.
C
You know, in terms of testing, or disrupting SIGs' plans. I actually did a lot of analysis of the last couple of years of work that SIGs have done, and of what I know is planned for the next year, in terms of what supporting one additional node version would have impacted, and it's actually relatively small. Most of the time, when we roll out new features, we just say: if you want to use the new features, you have to be on new nodes, which is actually pretty reasonable. There are things we could maybe do to make that a little safer or a little more user-friendly, instead of just failing a pod if someone tries to use a feature that their nodes don't support yet. But generally, saying "if you want to use new stuff, you have to upgrade your nodes" seems okay to me.
C
I could only find a couple of instances where that wasn't the case, and it's usually around relaxing security policies, or changes in default behavior that would actually regress or break things if not all nodes supported the feature. There were only a couple of instances in the last couple of years that I could find where we were waiting for the oldest node to support a thing.
C
So I actually went through the last two years, starting with 1.22, so 1.22 through 1.27, and looked at all these categories of things: feature additions where we waited to graduate or permanently enable something until the oldest supported nodes had it locked on. The bound service account token change was the only one I could find where the oldest node had to support it, because otherwise your pods would now fail to start; so if we were supporting n-3 nodes, we would have had to wait.
C
Yeah, so this actually touches on another point, which is that our feature rollout strategy is pretty simplistic and naive right now. It's basically: lock something on, and then wait months or years.
C
I would love to see that improved, but I think that's orthogonal to this. But yeah, I would love to see that improve.
C
So the first category was features that we waited to enable, and there were really only one or two, and both were SIG Auth, actually. Security stuff tends to be the thing where you have to stay compatible with the oldest thing forever. So feature additions were one dimension, and probably the most important: I don't want to slow down SIGs working on stuff and make things take longer for them, if we thought this was going to be a common occurrence.
C
The second category of stuff was removal of deprecated behavior, which is sort of the flip side: nodes no longer need a particular behavior.
C
The control plane still has to support it until it ages out of the oldest skewed node. The only example of that I found was SIG Storage dropping the in-tree volume plugins: once those switched to CSI and locked CSI migration on, the control planes still had to support the in-tree volume plugins until the oldest skewed node had CSI locked on for that volume type.
C
I mean, with the big cloud provider libraries, we're happy to see those drop out. It would have been a little irritating to have to keep those linked in for another release, but it's not the end of the world, and people who really care can already build providerless. If they're not actually using those things, they've already switched to CSI migration, and if they really care about not having that dependency, they can build providerless anyway. So we don't want to slow down feature additions.
C
And I couldn't find examples of delaying cleanup that would have actually increased maintenance a whole lot; mostly it's just "don't touch that code for one more release, and then delete it". Then the last category was REST APIs, and we're actually doing really, really awesome here, ever since we pushed to get all the required APIs to GA in 1.19.
C
Ever since then, node components have always been able to operate against API servers three versions ahead. We did a really good job of cleaning up usage of deprecated beta APIs in node components, so in 1.22 we dropped APIs, but nodes had long ago switched to the GA APIs. Same thing in...
C
I might have put that in the wrong release; I was doing this late at night. EndpointSlice is a similar thing: 1.25 dropped the beta version, but kube-proxy had already switched to the GA version in 1.22. Anyway, I tried to think really carefully about what this would cost in terms of impacting project velocity, and it didn't seem significant.
C
So then the cost primarily became a testing effort, and I talked with SIG Cluster Lifecycle, who set up the current skew tests that exercise n-1 and n-2 nodes, and it wouldn't be difficult to generate n-3. The main risk there is that kubeadm technically only supports nodes one version older; it doesn't even support n-2.
C
Ignoring
the
cube,
atom,
guardrails
and
just
saying
I,
don't
care
just
set
up
a
two
version
old
node,
and
so
if
the
cubelet
like
started
changing
how
you
have
to
invoke
it
command
line
or
node
config
or
something
if
it
started
changing
in
ways
that
Cube
Adam
had
to
react
to
even
the
existing
SKU
jobs
might
start
breaking.
We
might
have
to
rework
them
and
use
like
an.
C
Maybe. So I look at this as: whatever the core Kubernetes components support, those are building blocks, and they sort of give the lowest common denominator. If the core components don't support it, then nobody can reasonably say they support it; and if particular management tools or upgrade managers want to support n-2 or n-3, they can choose to.
A
No, yeah; kubeadm is, you know, in that phase where they want to have their cake and eat it too. They want to be part of the project, but you know, we haven't been able to staff a separate team for kubeadm that would take it out of the k/k repository, as I've been saying for the longest time.
C
Yeah, so I'd probably mark that as potential future project work: if the project's deployment tools want to support the skew, they could, and that would probably be useful. But they could do their own surveys of users, to ask whether the people who are using kubeadm want this. I don't know.
C
Anyway, that's the context of this. What I wanted to do here was just signal-boost this goal, and ask if there were things I wasn't thinking about: categories of work or testing that I didn't consider here that this would really impact, or slow down, or cause problems for.
A
I think we have to call out that this covers only the basic components, and not the add-ons, because things like Calico and the other CNI providers, for example, live on the node but call the API server too, right?
A
So
that
is
one
one
thing
and
like
we
don't
have
any
control
over
that.
So
you
know
we're
gonna,
not
talk
about
that
at
all,
and
it's
up
to
them
up
to
the
cni
providers.
What's
queue
that
they
want
to
support,
so
we
we
can.
We
should
clearly
put
that
right
on
top,
so
people
will
confuse
you
know
what
we
are
doing
here
with
like
hey
it's
up
to
them
right.
So
at
this
point
they
are
probably
doing
one
n
minus
two.
C
I've seen... sometimes I see them versioned with the control plane, actually; if they're running as a DaemonSet, they might actually be versioning their CNI driver with the control plane. CSI drivers do that a lot, right?
C
Actually, when I was scrubbing through the last two years of feature work, there were things that spanned control planes and nodes, like seccomp, but we were so slow in rolling those changes out that the node changes landed way earlier.
A
Right, and if we can't go faster than that, that's okay, because we are doing it slowly anyway. So yeah.
C
And there are plenty of features that get introduced, then graduate to beta and graduate to GA, but they're already not waiting for node rollouts. They're saying: we don't work with the n-1 nodes; if you want this feature, upgrade your nodes. And honestly, that seems totally fine, if we could make it more obvious or more user-friendly, so that if someone tries to use the feature, instead of their deployment just failing, it told them "sorry"...
A
To me, Jordan, this definitely helps. And when we were talking on Slack, you also mentioned that this was embedded in one of those docs from before the KEPs, from before, when we were doing the...
C
The annual release, yeah. Yeah, I think... where was it... the annual release.
C
Yeah, it made reference to basically exactly what I'm trying to accomplish, but I think it did the math wrong, and ended up not actually allowing the oldest kubelet to work against the newest API server, because it didn't account for the lag where we release a new version and then support the oldest minor for a couple of months, to let people qualify and upgrade. During those couple of months, we don't actually support the oldest kubelet against the newest API server anymore.
A
The other thing I think we should also call out is the thing that came out from your chart, which says you can keep moving the control plane, and only at the end of the year do you really have to move the nodes. I think that is a big plus point, right? For all the people who are going very slowly, we can say: hey, now you have one more option. And I think that is a very powerful option that we're giving them here.
C
Derek is not here today; I'm taking this to SIG Node next week. But Derek and I were talking about this, and he was emphasizing that control plane upgrades and node upgrades are not equal in terms of disruption. Nodes are inherently way more disruptive, because if you're upgrading all your nodes, you're guaranteeing you're cycling every workload in the cluster.
C
That's true, but I think we underestimate how many people are like: Kubernetes is solving a business problem for us, and it's actually solved; we're just using it. We don't actually need new stuff and new releases; we just want to stay supported, stay stable, and stop being disrupted.
C
So we support patch releases on these old versions. We actually just had a blog post go live yesterday, talking about how we're trying to identify places where we're having difficulty keeping those old versions secure, and to fix those things. So sitting on these 1.40 nodes for a year: those will get patch releases with security fixes, and now they actually get Go security fixes as well. So that part...
C
Right, that's fair. No, fair enough, fair enough. I mean, if you're using immutable nodes, then yeah, you're going to have to cycle them to get the patch upgrade, but then there's actually a reason. What I'm trying to avoid is making people upgrade for things they don't care about. So if someone wants a security fix, needs a security fix, then sure, you need to upgrade; maybe you can do that non-disruptively, maybe you have to be disruptive, but...
C
That's what I've got. I'll probably tag a couple of arch folks; I'm trying to get a couple of people from each SIG to actually be officially listed as reviewers, but obviously anyone who wants to can take a look and leave comments about things I missed, or costs that I didn't consider.
A
Tim, I would also say: make sure you have your own registry, and don't pull fast, if you are doing all these kinds of things. Oh...
A
Okay, I think, yeah. Any questions for Jordan? Going once, going twice.
A
Okay, let's go to the next one, which is...

H
So my question is not nearly as ambitious as the previous one, but I was reviewing PRs over the last few weeks, and I noticed an increasing trend of getting tagged on PRs that just need approval, because they've already been reviewed by the area experts, who just don't have the approval rights that they need in the directories they're touching. I promised that I would bring this up with this group, because I know a lot of the people here suffer the same syndrome as me, and the question I guess I want to pose is: how do we do better than that?
H
It seems stupid for people to have to wait for me to pay attention to their thing, and Jordan's much better at sweeping up the small PRs and approving them. But you know, at some point people get stuck, and they wait for one of us to come along and just say: yeah, all the people who know this code have already approved it; I'm just the boilerplate stamp.
H
So the question came up of: should we refactor our packages into better structures, so that people have better, more granular ownership? And somebody mentioned that there was, or is, an effort to do file pattern matching in OWNERS files, and that maybe that effort has stalled out, and maybe we should infuse a little bit of energy into it. It's certainly easier to refactor into files than to refactor into packages.
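For context, the Kubernetes OWNERS spec already defines a `filters` stanza that scopes approvers to regex-matched file patterns within a directory; a minimal sketch of what per-file ownership looks like (the usernames here are placeholders, not real approvers, and plugin support for honoring `filters` during approval is exactly what the discussion says is incomplete):

```yaml
# Sketch of regex-scoped ownership in an OWNERS file.
# Usernames are hypothetical placeholders.
filters:
  ".*":                 # fallback rule for every file in the directory
    approvers:
      - root-approver
  "\\.go$":             # Go sources get their own approver set
    approvers:
      - go-area-expert
  "_test\\.go$":        # test files can be approved by test reviewers
    approvers:
      - test-reviewer
```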
A
Yeah, that is waiting for the approve v2 plugin for Prow, and that is stuck; it's been stuck for a couple of years now. I think we should look at it for 1.28, as soon as master opens up, to drive it in. It's waiting for reviews, basically; the code has been ready, and it's been tested by folks on other GitHub orgs, we just haven't been able to enable it here. The code is ready; it's just waiting for reviews.
A
I think we should speed up the reviews to get that in, and if there's anything else we can think of, for sure.
A
One other idea, which I tried a little bit before but it didn't really work: the PR reviews channel. We could ask authors, when they have a reasonable amount of LGTMs in the areas that are covered, to pop up there and ask for help with approval, for things from you and Jordan, for example.
A
The only problem with that is keeping it refreshed and up to date; that is where we are failing, right? We have stale owners, stale approvers, and we haven't been able to nail that problem yet. So even if we go around making all the changes now, six months down the line it's not going to be the same, right? People come and go; that problem we haven't solved.
A
Okay, thanks Tim. Cece, you're up next.
F
Yes, hi everyone. I'm here just to seek approval for a new repo under kubernetes, for the ValidatingAdmissionPolicy feature. As most people might be aware, there is this new ValidatingAdmissionPolicy admission controller, which aims to offer a declarative, in-process alternative to validating admission webhooks, using CEL, and I'm happy to share that this feature has attracted a lot of interest in the community and the ecosystem.
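For readers unfamiliar with the feature being discussed, a minimal sketch of a CEL-based policy plus its binding, using the alpha API group as it existed around the 1.26/1.27 timeframe (the policy name and the replica limit are illustrative, not from the meeting):

```yaml
# Sketch of a ValidatingAdmissionPolicy (alpha API, ~1.26/1.27).
# Name and limit are hypothetical examples.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"
      message: "replicas must be at most 5"
---
# A binding is required to put the policy into effect.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-replica-limit-binding
spec:
  policyName: demo-replica-limit
```

The declarative, in-process nature (CEL expressions evaluated in the API server, no webhook round trip) is what the proposed out-of-tree repo would emulate for clusters that cannot yet enable the feature gate.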
F
So a lot of teams, internally and externally, have reached out to see when they would be able to use the feature. But unfortunately, due to the recent updates to the API, like the default policy change, and the feature-gated nature of it, the feature has to wait until it goes to stable, GA, before managed clusters and their users can really adopt it. So I'm here to see if it's possible to... sorry.
F
To
offer
and
the
for
the
interest
group
to
adopt
it
early
and
I
know
like
a
similar
mechanism,
whereas,
like
offered
earlier
for
portal
security
animation
to
offer
an
alternative
before
people
can
fully
kind
of
do
the
migration
so
I'm
here
to
see,
if
it's
possible
to
kind
of
have
a
repo
for
that.
A
F
Yeah, that's a good point. We plan to offer a full implementation, ready to be used by users, through CRDs; technically, it's a webhook.
F
We'd mirror whatever the in-tree feature has to this external repo. That way, users can adopt the feature early and be able to use CEL in a very similar way to ValidatingAdmissionPolicy, and later, when the feature is fully ready, the transition will be almost as simple as just applying a bunch of existing YAML files.
G
Consider dropping the... or making the validating piece more generic, because I think we'll see mutating admission coming eventually. It would be nice to just be able to reuse the same repo for that as well.
A
Tim, can you add your comment on the issue itself, please? Yeah. Get lost!
H
I will miss you all desperately, and I will have a beer at noon just so that I can be drinking at the same time as you, but I will not be there.