From YouTube: 20210225 SIG Architecture Community Meeting
A: All right, welcome everybody. Today is February 25th, 2021, I believe, and this is the SIG Architecture community meeting. Please, everybody, be respectful, and let's get started. So, Elana Hashman, you have the first agenda item, if you want to introduce the topic.
B: In theory, a 1.18 kubelet should be compatible with it, but I was doing some research and I could not actually find anywhere that we were testing for this compatibility. So I raised this in SIG Node, not this week but the week before, and Dawn gave some really great history on what we had done with this stuff. From what I can tell, we haven't actually tested for this in CI as a release-blocking thing, or something like that, since around the 1.14 release.
B: So I discussed it with SIG Node, and SIG Node said: well, we didn't own those tests, those were part of Cluster Lifecycle; I think, you know, go and talk to either the conformance group, because it's a conformance gap, or to SIG Architecture, and see what they say about what we should be doing for this. So I sent the email, and I am raising it here; I would like to discuss it. I think that we should do one of two things: either we should say we don't test for it...
A: Thank you. And I was just saying before we started recording, that's why I started recording, that yes, I personally think that we absolutely should be testing this. I know we depend on this compatibility skew, and I think Derek had some comments back on that.
C: Yeah, so I definitely agree we should be verifying what we presently claim, and I think the latest state on this thread was that a PR was open to try to meet that verification, if my memory serves me well from yesterday. The caveat is that that test is done through kind, and it's not necessarily requiring a true host in the sense of what, you know, the kubelet would say is supported.

C: Personally, I think that's a fine outcome, in that the skew testing is intended to verify that kubelet-to-API-server communication channels are valid and not regressing, more so than saying: does a kubelet of any n-1 or n-2 version function as expected? Right, we've already verified that in that particular version-specific thing.

C: The one additional comment I would have is that I'm aware that we are debating the number of releases that we are potentially looking to support as a project. My personal opinion is that if we shrink the number of releases, I actually would be open to revisiting this discussion and saying n-2 skew is no longer as critical, and we could go and do n-1. But to me that's a topic we should evaluate with clear eyes as we potentially change the Kubernetes release cadence. My view is that our present cadence was because we wanted to mitigate workload disruption boundaries, and if we have elongated release cycles, there's a different workload disruption boundary we could talk through. So I'll pause there and see how those feel.
A: Well, I'll just say I agree: we don't need to test full functionality just because there's a skew; those are separate things. We're testing that the kubelet still understands the API server and that the API server didn't break something backwards-compatibility-wise.

A: So that makes total sense. It does look like the conformance suite is run against it with the skew, which actually would test some of that functionality; it's probably somewhat redundant, but still, that exercises all that machinery, and I don't have a problem with that, really. And yeah, I agree that that's a separate discussion. Right now we declare this support, so we should be testing for that support, and whether we declare different support is a different discussion.
B: ...a lot of extra work that we need to be accounting for, for example in production readiness review for PRs, where we care about things like: have you thought about this skew testing and how we're going to manage that phasing? And I think it could potentially reduce a lot of surface area for testing and cases that we had to care about.

B: I don't know what the history was, but I heard at some point that part of the issue with why we stopped doing this was that there were breaking changes that made the two-release span either impossible or broken, or something like that, or it was just too difficult to test. I'm not really sure what the context was, because I was not involved at that time, but I would be potentially worried about that as a risk. But either way.
E: Yeah, my ears definitely perked up when I heard something about an actual incompatibility. I am not aware of one. I know of test environment issues with, like, upgrade tests, like the test framework not being maintained or not working well. SIG Auth actually went and did a lot of work to get that running again last release, so we could test, like, the service account token upgrade feature.

E: I was really happy to see the PR and Cluster Lifecycle jump in with that. In the past, we've had stuff get set up and be in good shape for a release or two, or three, or five. It looked like those tests were generated, which is good, so it'd be good to make sure that whatever regeneration or bumping of that needs to happen with each release is part of the release playbook, so that it happens and doesn't sort of drift off into oblivion once the person driving it gets pulled away. It's awesome that he was...
G: But there's also the problem of people not having interest in maintaining those jobs, though I would definitely try to help with this effort. You know, those poor people... actually, these tests are not generated; I write them by hand every time. But yeah, maybe one day we can have tooling. Right now, what we do is we kindly ask somebody to help with adding new jobs for 1.21, for example. We carefully review it, and if there are problems, we fix it, but it's a manual process for now.

G: ...and they help us with updating all the items, so it's like a housekeeping task for every release that we have, and the docs cover how to update the end-to-end tests as well.
A: Okay, that sounds good. Sounds like Derek has...
C: Am I misunderstanding what we report? I think, Jordan, you wrote that doc, but I thought we say they version together.
C: And then the other question, since I did raise it: I am personally a fan of, if we shrink our release cadence, that we could shrink our supported skew. As a matter of project governance, is that a SIG Architecture decision or an owning-SIG decision?
A: That would be something we would want SIG Architecture to make a decision on, because it crosses multiple SIGs.
C: Yes, and so I agree, and then I guess I've...
E: For kube-proxy, we document that it can be at most two versions older than the API server. The typical deployment model when this was originated was kube-proxy running as a static pod on the node and left at the same version for the lifetime of the node, so if the kubelet was two versions older, kube-proxy was as well.
C: This could be the first point where we have that conversation. I don't know if we want to have that conversation today or get more detail on it. Jordan, if you said the deployment patterns in practice were one particular way versus another, we could see if we want to revisit what the kube-proxy skew should be, but it seems like we have that gap right now.
G: Did we also have a skew between the kubelet and kube-proxy? I remember something about the iptables...
A: I think, Jordan, those tests are owned by Cluster Lifecycle, right?
E: I think kubeadm deploys kube-proxy with a DaemonSet. Am I correct?
A: So the issue is that these skew tests right now are done by kubeadm, which deploys kube-proxy as a DaemonSet, and it's kind of difficult for them to deploy kube-proxy at a version different from the control plane, which they can do with the kubelet easily, but not with kube-proxy. So the question was whether it's really then SIG Network, given that it owns kube-proxy, that should be testing kube-proxy during the skew tests.

I: And I think that's a, I think that's a fair statement. I don't think we have explicit testing for that in place. Okay.
A: So let's create an action item, then, for SIG Network. I can... I'll create a SIG Network issue for that, because... that's fine.
A: Thank you, everybody. Next, then, it is Hippie for an update from the conformance subproject.
K: I think we've got it in the doc, so that's linked here. We just have a few PRs needing review. We have our ineligible endpoints YAML, which would be great to take a look at.
K: It's there so that the ii and APISnoop team don't have to handle out of tree which endpoints are part of GA or alpha or beta, until we can get some more metadata within the API export; there's some really cool stuff happening in that regard. And then we have a test in from Claudio, which also looks like it's ready for some review, and we have the service lifecycle test, but that was just a quick note for that.
K: Overall, we've been working on this project for quite a while now, and I just want to note that we have crossed the halfway mark. Back at 1.15 we had 266 operation IDs, or API endpoints, that were untested, and as of right now I believe we're at 133 remaining. Our end-of-year goal, just to put it in context, is to get below 75.
K: And we are on target for that within our 1.21 release. Let me actually go back and show you on APISnoop where this data comes from, our conformance progress area. This would have been 1.15, where we have this 260 area here, and here's our remaining 133.
K: Weird. All right, give me just one second: stop share, share, desktop one, yes, share. And now I'll also drop the link.
K: All right, I'll stop sharing. I'm on a normal Mac, nothing's different, a normal Zoom client. I can still... if you'll just pull up the... I can just ask people to click on links; I can share it with you. Cool. So, we're almost there, I think, probably by mid next year, by halftime next year. And our goal right now for this release is to try to get in our 30 points.
K: I think we're at 22 right now. Even though the graph looks a bit strange, that's because we had a couple of tests that we modified, and it's not easy to try to count some of the points inside of a test just for the endpoints we hit, but we are keeping track of those. A big deal for our apps endpoints is that we have moved from 48 to 58.
K: So if you click on that "58 tested" link in the doc, it'll show the sunburst specifically for apps, and you can see how close we are to wrapping up apps. A lot of our endpoint work for the next release or so will be focused on those 37 remaining endpoints, and they've been super helpful as a SIG in helping us with that. We have two more tests inside 1.21 that are ready to go for Testgrid. One is the write service status test, and that's four...
K: We would appreciate an LGTM and approve on that, if you're in the OWNERS files. And we have a new merge today that we're going to wait to start soaking, so we get our two-week promotion in there in this release to get us closer. We have a flaky test that's been identified, and there's a bit of discussion on how deep we need to go with regards to increasing timeouts. A lot of what we've seen is that when things seem to be under load, you can kind of see the entire load for the entire job go up, and we don't understand why the load changes, but it's consistent.
K: It started on this date, and I'm hoping we can resolve this quickly with the timeouts, so that this couple of endpoints, you know, those plus four endpoints, don't get shifted into the next release.
K: And the last one, oh yeah, the ineligible endpoints.yaml, which I talked about earlier. We've talked about it before; I just need to get the last few folks to weigh in on that and do that merge, and from here on out it'll be manual. You can even just go into GitHub and edit it and add or remove three lines.
K: It's pretty deep. Again, the question is: why is it stuck? We just see a pattern of things getting overloaded, and it just takes longer for things to happen, so we have to increase those timeouts. Once they're increased, it tends to consistently pass.
E: ...where we jump over to, like, the scale tests and see if the CPU or latency just jumped at the same time that this test...

E: Yeah, so there are three possibilities, right? Some other regression could have caused CPU usage to spike and slowed everything down, particularly affecting tests that were close to their timeout but were sort of skating by; then, as soon as everything slowed down a little bit, five percent, ten percent, the tests that were close to their timeouts started failing. So one possibility is an unrelated regression; it would be good to tag SIG Scalability and see if the scale tests indicate anything has happened recently.

E: Another possibility is infrastructure; that's where we normally tag in SIG Testing and see if the test pools are looking overloaded, IO throttling, stuff like that. And then the third possibility is a change in the test. If this test hasn't changed since 1.20, then it's likely not a new problem with the test. But before we bump up timeouts, we just want to understand: are we masking an issue?
L: Yeah, basically no, I'm not aware of any; the graphs look pretty flat, at least for the last two weeks. I'm not sure exactly when that changed, when that regressed, but if it's within the last two weeks, we don't see anything. I don't have the full context, so I don't know what exactly the feature is, so it's possible also that we don't test that, but...
K: The flake is actually at the very top; there's a link to somebody noticing the flake. It's 98180, the PR to address the...
M: The test is just waiting for the deployment, for the replicas to scale, on the initial part. It's failing in two separate parts of the test: one at the start, and the other when it's waiting for a watch around a fetch. And I was looking a little bit more at the Testgrid, and it's perfectly okay on the one that I referenced at the top.

M: But then, when I looked at the kind and the kind IPv6 jobs, they've got irregular flakes turning up, but some of those timeouts are quite low in comparison to some of the spikes that are happening on, I think it's the normal conformance Testgrid that we use to check that a test is all okay before promoting it.
A: The next question is: is there a widespread infrastructure issue that's causing test jobs to run longer or flake? That would be the next question, and, you know, that means looking in detail at: did it really just take a long time to get to running, or was there something stopping it? Where did it get stuck? Was it able to schedule? All that sort of thing.
K: We can try to identify the infrastructure thing, and I think what we've done in the past is try to correlate the flakes to increased load in the job and to, like, the entire time for the run to occur. If we can show a pattern of that as a reply here, then that will help focus it: it's not a scalability problem but likely an infrastructure problem, because of that correlation.
A: Excellent. Any other discussion here? Anything anybody needs, or are we all...

A: ...done? Okay, oh wait, Jim, you've snuck one in. All right, you need reviews for a doc on how we want the feature gates handled. Hi.
J: I was trying to... well, Elana started this, saying we don't have a doc for this: what do we change in features.go, when do we change it, and what do we change it from and to? So I ended up just looking through some old PRs, looking through the traffic in the git history, and tried to summarize what I saw, but I would like you all to take a look at it, especially...

J: I think there is one last bit which I wasn't too clear about, so I need some help wrapping it up.