From YouTube: 20221201 SIG Architecture Community Meeting
A
Hi, hello. Today is December first. This is the release week for 1.26, so we are light on agenda and we'll just be covering some of the conformance-related stuff. Today, why don't you go ahead and let's talk about the items that you have on the agenda.
B
Thank you very much. If you care to, you can share; otherwise you can make me co-host and I can share, by the way.
A
Okay, this is a new laptop. I think I'll give you share rather than make you co-host, no.
B
First point I want to bring up is the ineligible endpoint that we added to the ineligible endpoints yaml file. So there's two discussions out of this. The first one is: this PR is ready to merge. So if you can add an approve and a hold, I will unhold it the moment the release is cut, and then we can update APISnoop, because I don't want to mess with the release things. If that would be helpful. Thanks.
B
The first point on the agenda: basically, there's a new endpoint that came in. You should see it on my screen, which is a getResourceApiGroup endpoint. It's in alpha, but because it does not have "alpha" in its URL, it actually shows up in APISnoop as a GA endpoint. So we had a whole discussion down here, and Jordan agreed.
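The misclassification described here comes from inferring API maturity from the URL path. A minimal sketch of that heuristic and how it misfires on a group-level discovery path (the classifier logic below is illustrative, not APISnoop's actual code):

```python
def level_from_url(path):
    """Guess API maturity from a URL path. This only works when the
    version segment (v1alpha1, v1beta2, ...) appears in the path."""
    if "alpha" in path:
        return "alpha"
    if "beta" in path:
        return "beta"
    return "ga"

# A versioned resource path carries its maturity in the version segment,
# but a group-level discovery path does not:
print(level_from_url("/apis/resource.k8s.io/v1alpha1/resourceclasses"))  # alpha
print(level_from_url("/apis/resource.k8s.io/"))  # ga -- wrongly: the group is alpha
```

This is why an alpha API group's discovery endpoint can surface as an apparently-GA, untested endpoint.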
A
Yeah, I got it. Yeah, I remember this now. So one question for you: where do we use this information in the ineligible endpoints yaml?
B
That's another agenda point that I've got, so I'm going to explain. Because Jordan did ask, saying there's things that need to come back on the list in here, I did a bit of explanation here, and it's part of our 2023 plan. So when I get to that point in the agenda, I'll quickly run through the explanation.
A
So this doesn't have an LGTM approval, so let's request Jordan for an LGTM and then I can do the approval on this. Okay, fantastic.
A
If that's the only one that is using this yaml file, then I'm not inclined to, you know, try to land it in 1.26. No?
B
No, it's not, it's really not urgent. And now that I think about it, APISnoop will process it and take it totally off, so you've kind of got that one little endpoint there at the top, and the moment that gets merged in, it gets removed out of the data and that endpoint will disappear off the graph. Yeah.
A
Okay, when 1.27 opens up, you will have one more bar in the graph that says 1.27.0, right? Exactly, exactly. And it'll show — it will not have any gray, so which —
B
It will have thin gray and gray endpoints, because that's what's left, or — when I go through the 2023 explanations I'll do a rundown of what we're planning for next year. Sounds good, thank you very much. So yeah, as soon as the release has been cut and that's been approved, I'll merge it in. Okay, the next point — I think I'll skip over the next one.
B
It's a conformance question and we can get back to that in a moment, because there's many questions about 2023. So basically, Jordan actually started the conversation about a thing they were planning to present this week, where we're going. So by the end of 1.26 we're actually going to have ten endpoints remaining. If we go to APISnoop and we go here, we'll see — here we go, untested — it's only these ten endpoints remaining.
B
So those are the ones that we've got to try and get in within 1.27; we're trying to kill off all these endpoints off the list, which will then take us to 100%. And then Jordan made the very valid statement — if we go through his comment here — and said, but this went GA in 1.25, and this is not optional since 1.21. So, what happened in the initial stages of creating ineligible endpoints?
B
There was just a list which was part of the code of APISnoop, which discounted those endpoints. And then in the beginning of — I think 2021, yeah, early 2021 — we basically moved the ineligible endpoints into the yaml file and consumed it from there, so it becomes public and people can actually ask the questions: why this, and why that, and do we need to bring them in? Because it felt to me that we'd agree about something in a meeting, put it in there, and nobody would ever know about it again.
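For reference, an entry in that public ineligible-endpoints yaml looks roughly like this — the field names here are illustrative, not the exact schema; check the file in the APISnoop/conformance tooling for the real layout:

```yaml
ineligible_endpoints:
  - endpoint: connectCoreV1GetNamespacedPodPortforward
    reason: "Port forwarding is a debug feature, not part of conformance"
    sig: sig-network
    agreed: "SIG Architecture conformance meeting, early 2021"
```

Keeping the reason and the owning SIG next to each endpoint is what makes the "why this and why that" questions answerable later.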
B
It was kind of hidden; now it's out in the open, yep, and it was right for Jordan to comment on that — I appreciate it. So what we're going to do in 2023: we're going to create a list of all the ineligible endpoints where we can discuss them. We'll bring it to this meeting and then we can discuss what people say — exactly what Jordan did: he just did a scan and found a few, and I'm sure there are probably more, yeah.
B
Those are some of the very first ones that were on the list that did not come in with a PR — that's from like two years ago. So, great discussion. What we want to do is leave that as the last thing, at a point where we can just celebrate: okay, we've reached 100%. Now let's see what should come back, and then we're going to work through that list and, one by one, create the e2e tests for those endpoints, then promote them to conformance, and then remove the endpoints from the list.
B
So we're going to stay on 100% coverage, so as to have a clear picture of where we are: we've killed all the technical debt, and then we keep everything in the open. And I'll find a way — I've already discussed with Zach — to visualize that in APISnoop, to say these things will be coming back. I'm not sure exactly how we're going to do it yet, but we will publicly announce that these things will come back and discuss it with the SIGs, yeah.
B
Then once we've gone through that — hopefully that wouldn't take all year, and there's not too many, but whatever comes back we'll deal with — and once that's done, we're going to review all conformance tests. Because at the moment there's a lot of this, and SIG Apps specifically picked up on it: there are endpoints where a resource is created, one or two endpoints are tested, and the resource is deleted; then the same resource is recreated, some other endpoints are tested, and it's deleted again.
B
So there's a lot of setting up that's unnecessary. If we can take all that — four or five tests in a specific resource file — and generate a single test that covers all the endpoints, promote that to conformance, and remove the old ones, we basically reduce the compute and the overhead for this.
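The consolidation idea can be sketched as a single lifecycle pass that exercises every endpoint once instead of several tests that each create and delete the same resource. This is illustrative only — real conformance tests are Go e2e tests in kubernetes/kubernetes; the method names below match the Kubernetes Python client's CoreV1Api:

```python
def configmap_lifecycle(api, namespace="default"):
    """Exercise create/read/patch/list/delete for a ConfigMap in one pass,
    returning the endpoint operations covered, in order. `api` is any
    object exposing the CoreV1Api ConfigMap methods."""
    covered = []

    body = {"metadata": {"name": "conf-demo"}, "data": {"k": "v"}}
    api.create_namespaced_config_map(namespace, body)
    covered.append("createCoreV1NamespacedConfigMap")

    api.read_namespaced_config_map("conf-demo", namespace)
    covered.append("readCoreV1NamespacedConfigMap")

    api.patch_namespaced_config_map("conf-demo", namespace,
                                    {"data": {"k": "v2"}})
    covered.append("patchCoreV1NamespacedConfigMap")

    api.list_namespaced_config_map(namespace)
    covered.append("listCoreV1NamespacedConfigMap")

    api.delete_namespaced_config_map("conf-demo", namespace)
    covered.append("deleteCoreV1NamespacedConfigMap")

    return covered
```

One setup and one teardown cover five endpoints, instead of five tests each paying the create/delete cost.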
B
So that's one of the strategies that we have in mind for next year. And then also, there are testing conventions — Stephen and I did a little bit of a spiel here about what we want to change — to make sure that things are on the latest version of Go and the best kind of testing we can do, as we've developed it over the years, because some of those tests are like five years old, yeah. So: get more consistency across different things — and specifically, like SIG Apps: have similar testing across all SIG Apps resources, to make it more consistent and reduce the cost.
A
Can I ask you to add, at least, trying to figure out which tests take how much time, and, you know, see if we can make it easier for people who are running conformance tests to not waste as much time? You see what I'm saying?
B
So is it the optimization — to see in which ways we can reduce the test time, right?
D
Sometimes — I think from memory there's a plan to change some of the settings for timeouts so that they're manually configurable, so people can look at overriding them. I think there's one or two discussion points and some other PRs that I've run across in the past, and that's potentially part of the process, right: to remove those hard-coded constants and bring it in — I think it's to do with the framework.
A
Yeah, and one piece of feedback that I've gotten from folks was like: hey, we just run the conformance tests until it's all green, right? And we need to collect, you know, flakes also from folks, I think.
A
Yeah, in testgrid you can look at the number of seconds that each test takes — the granularity that is there. Yeah, I just need to —
B
If you can grab a link, I'd appreciate it — just throw it in the chat. What we do normally, when we bring a conformance test for verification: one is on e2e and one is on testgrid. There's an option in testgrid where you can choose to show the seconds that it runs, and we also bring up the graph and you can see the seconds, and whether it's flaking or not.
A
Yeah, and that information is surfaced to the approvers when a test is being marked for conformance. But the thing is, after it is approved, we never go back and check whether it is deteriorating over time or not, right? So that is basically what I'm asking us to take a peek at, yeah.
B
Okay, great. We will have a look at that, and we will continually come back to this meeting with "this is where we are, this is what we're thinking", and, I think, throughout the community that would be very valuable work too. Cost is important: reduce costs, reduce unnecessary runs, and also make it, as I say, more convenient for the end users of conformance testing to know how much time it should take.
B
Right, then the other important thing that we're going to need to figure out — the process, I'm not sure exactly yet, but I'm going to figure out something where we monitor the ineligible endpoint list on a periodic basis, probably at the start of every release, looking at KEPs that are going to come into a specific release. So Stephen and I will work out the process, where we make sure we don't end up six releases down the line going: oh, we forgot about this endpoint that needs to come back on.
B
It came back and nobody did the conformance test — so we want to make sure that we have a better monitoring format. Okay, then, what's also going to happen continually — which is already happening at the moment and we're going to continue in 2023 — is ensuring there's no new technical debt: we have a periodic job that runs a GitHub action, so APISnoop is updated every weekend, and Monday morning New Zealand time, which is still Sunday in the US...
B
...we actually monitor and see what came in new, so we always see if something sneaks in. Especially around code freeze we're really diligent in checking this, and we do some extra manual runs to make sure we don't get more technical debt. And then, on automation, we've obviously been increasing automation. So APISnoop: previously, at the end of every release, we had to go and do some tweaks and updates to make sure that — as you said earlier — the new release would get added.
B
Yes, and then another great thing that Stephen did for us: there was an open issue since 2018 where — if we go to the conformance directory, under there we have docs, and in the docs we have all the release documentation, where you can get the detail of all the changes, a summary for the release — there was an issue open since 2018 that documentation was missing. I think basically everything from 2018 on was missing.
B
So Stephen added that in for us recently, and now he's also automated it: the moment the release gets cut — we tested it; it should work just fine next week — it will automatically create a PR bringing in the 1.26 documentation. So, well done to Stephen there.
B
If we go to the pull requests — for those that have not met the CNCF CI bot: if you go into any PR, the labeling happens by the cncf-ci bot, and that also now has an automated update. Because you'll find, probably within a day after the release has been cut, somebody's submitting conformance results, and if we're not quick on it we actually get flagged, because the release is not yet available to be checked by the bot. So that's also automated with a PR now.
A
So — can you go back one sec? What is the list of requirements? It says 15.
B
Okay, let's find one. So this has grown over a period of time to make sure that it does all the things that we need. When a PR doesn't pass, it actually tells you there are some things missing, and it tells you that you can look at the requirements here. There are actually a lot — it tells you what the problem is.
B
So this is the documentation for your submission, and under that we created this table, and the table tells you: okay, your PR title may not be empty; your submission contains all the required fields; it checks that you've got the right number of files, and it checks that they're actually in the right place — so if you submit for 1.24, that the files actually go into the 1.24 directory. So there's this whole list, and then an add-on that we recently made is in this check.
B
Now it also checks for an email address, to address the situation where we might want to communicate with all the submitters of conformance requests.
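The checks just described can be sketched as a small verifier — this is a loose model of the behavior, with hypothetical function and file names; the real checks live in the CNCF conformance submission tooling:

```python
# Required files for a conformance submission directory, per the
# submission instructions (e.g. v1.24/<product>/...).
REQUIRED_FILES = {"README.md", "PRODUCT.yaml", "e2e.log", "junit_01.xml"}

def check_submission(pr_title, version, changed_files):
    """Return a list of (requirement, passed) pairs, loosely modeled on
    the submission checks: non-empty title, files under the right
    release directory, all required files present."""
    results = [("PR title is not empty", bool(pr_title.strip()))]

    # Every changed file must live under the directory for the release
    # being submitted, e.g. v1.24/myproduct/...
    in_dir = all(f.startswith(f"v{version}/") for f in changed_files)
    results.append((f"files are under the v{version} directory", in_dir))

    names = {f.rsplit("/", 1)[-1] for f in changed_files}
    results.append(("all required files present",
                    REQUIRED_FILES.issubset(names)))
    return results
```

Each failed pair corresponds to a line of feedback the bot leaves on the PR, which is what makes merging a passing submission "a five-second job".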
B
Tyler also welcomed that; she said they had tried to maintain a mailing list, which is just too difficult, because every other release the people submitting the conformance results change — so, yeah, this simplifies it. So if there are any other requirements that people think of — it's been a growing thing over the last two, three years: we've gone from about three or four things that we check to 15 now, and it's improving the quality, and passing and getting it merged in is a five-second job.
A
There was one other longer-term thing that I wanted to talk about, you know, once you finish this. Go ahead, please. Okay.
B
Thanks. Okay, then we have a PR — quite quickly: so the endpoint that we mentioned up here, the getResourceApiGroup endpoint — we mentioned that it needs conformance tests to come in, and Patrick was very proactive about it and added this issue saying we need to remember it, and I added it to the dashboard, going forward, for the project as well.
B
But it makes a statement here that it depends on port forwarding, which is a debug feature and not part of conformance, and this sparked our interest, because if we go to APISnoop there is actually the portforward endpoint that needs to be tested. And if that statement is true — that's why we brought it here, to make sure. If that is — if I can just find it — if that is an agreeable statement, and port forwarding is not in conformance, we might be careful about trying to write a test for that.
A
We need an official statement from them saying: hey, this is SIG Network, and we think that this API doesn't need to be part of conformance. Then we'll put it in the ineligible list, and then we'll close this issue out, saying: hey, the thing that testing this depends on is not available, so we can't test this either, yeah.
B
So I will take that to their meeting, and — because this is a double question — if they say port forwarding is optional or is just a debug feature, then we'll take the portforward endpoint to the ineligible list as well, along with this endpoint. And that's it — thank you for all the patience, listening to my long story. Thumbs up to you. Okay.
A
So one other thing that I wanted to poke at, for the longer term, for this group: right now we are dependent on Sonobuoy, right? And most of the people are using Sonobuoy to run the conformance tests. Do you know of anybody who's not using Sonobuoy?
A
So if we can document one other way to run the conformance program, you know, other than Sonobuoy, I think we'll be in a better position going forward. Because, you know, I don't know if Sonobuoy is going to keep being funded by the folks who work on it, and the last maintainer that I used to work with is no longer there. So I don't know the future of Sonobuoy, so we should have at least one other way of running the —
A
Right, and the main thing there is going to be: hey, can we just run the specific conformance image that comes out with Kubernetes, and how would I get the logs that I need to use for filing the report? Basically, right.
B
Okay, so we'll look into that. But then also the issue of Sonobuoy itself, because everybody's mostly dependent on it: is it on the CNCF's radar that this is falling off? Because —
A
It is not a CNCF project, right? It's a VMware project, so we are using it, and it's funded to the best of their ability.
A
So, thanks for bringing up that they are also looking for, you know, maintainers. It would help if VMware put out something in their readme, or an issue, saying: hey, we are looking for maintainers; anybody interested in becoming a maintainer — that would be something we can point to and ask people to go join that project.
F
Yeah, yeah, I can ask around and help spread the word, but yeah. Thanks.
C
Anything else from me? No, nothing else for me. I am interested in, you know, what ends up happening with the timing for e2e tests — we've seen some issues with that downstream as well, yeah, but we don't have a good suggestion at the moment.
A
Yeah, yeah, I think if we can gather the statistics, both in our testgrid as well as, like, the OpenShift one, David, I think we'll get a sense of where to put in effort first — or, you know, the priority list of: hey, these are the top five time hogs of the test cases in the test suite. Something like that.
A
So I guess the one other thing there, Riaan, would be: how do you show this piece of information — like the timing for each test, averaged, you know, over a period of time — in APISnoop itself? You can go to APISnoop and take a look at it, but that's not going to be enough, because we'll need the trend over time, like David was mentioning.
A
So we need to figure out how to surface that information. If there is a PR change in some place and then the conformance job starts taking twice as much time — even though they touched some code, they didn't touch the test code, but they changed something in Kubernetes itself, which made the test take longer, right?
C
Like admission plugins, for instance, right — those are ripe for doing that, yeah.
B
That would actually be a very nice project: to have a statistical run every week for all tests and — kind of like we're monitoring the cost progress in KTM — have a test progress view. You could see data per test over time, and the overall run time. And it would help when we refactor this into single lifecycle tests, to see how much we gained — then you can say: okay, these five tests were worth three minutes, and after we refactored, did we go down to one minute? That also shows the value of the work that we're doing, because I think we're running up quite a lot of cost on the testing now.
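The weekly statistical run described above could start as something as small as the following sketch — a hypothetical check, not an existing APISnoop feature — that flags tests whose latest duration jumped well above their recent baseline:

```python
from statistics import mean

def flag_regressions(history, window=4, factor=1.5):
    """history: {test_name: [duration_seconds, ...]}, ordered oldest to
    newest. Flag tests whose latest run is more than `factor` times the
    mean of the previous `window` runs."""
    flagged = {}
    for name, runs in history.items():
        if len(runs) < window + 1:
            continue  # not enough data to form a baseline
        baseline = mean(runs[-window - 1:-1])
        if runs[-1] > factor * baseline:
            flagged[name] = (baseline, runs[-1])
    return flagged
```

Run weekly over per-test timings, this is also how the value of a lifecycle-test refactor would show up: the consolidated test's total time drops out of the trend directly.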
A
The closest analogy I can think of is the scaling team — like, you see all their graphs, right? And suddenly, if they see something that's been averaging at one level suddenly drop or go up, they're like: oh, what happened here? Something changed, you know, in Kubernetes. So then you go to the date on which it was run.
A
You take the date where it was the lower number and the date where it was the higher number, and compare the PRs that were merged in between — do a git bisect — and figure out what caused the change, right? That's typically how the scalability team runs. So if we could do something similar, that would be useful for us, yeah.
E
Even an imperfect solution that is well documented would be a big improvement. If I knew a doc to go to, like, to help figure those things out, it would save me a ton of time. Because often I'll be looking at a PR and I'll be like: these are a lot of tests; I have no idea if these are good tests or bad tests. If I had a doc I could pop open and start running commands, I would do it, absolutely.
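The kind of doc-driven command being asked for could start as small as a script that pulls per-test timings out of a JUnit XML report (junit_01.xml is the report file conformance submissions already include; the helper below is a sketch):

```python
import xml.etree.ElementTree as ET

def slowest_tests(junit_xml, top=5):
    """Parse a JUnit XML report string and return the `top` slowest
    test cases as (name, seconds), sorted slowest first."""
    root = ET.fromstring(junit_xml)
    cases = [(tc.get("name"), float(tc.get("time", 0)))
             for tc in root.iter("testcase")]
    return sorted(cases, key=lambda c: c[1], reverse=True)[:top]
```

A documented one-liner like this, pointed at a run's report, answers "which tests are the time hogs" without any new infrastructure.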
A
So, I mean, we don't have to go all the way — let's do documentation first, and then we figure out, you know, how to do it better over time. We don't have to go into automation immediately, but let's see how we can look at the existing testgrids — you know, ours and the OpenShift one — and, you know, start from there.