From YouTube: SIG Testing Weekly Meeting for 20230306
A: If you're new (I don't know if we have anybody new), you can add to the recurring topics, and please add yourself to the attendance.
A: Thank you, everyone. I think we can go ahead and get started. Michelle, you're the only one who can share your screen, I think, so if you want to share the screen, we can do that. Otherwise it doesn't look like we have that much. I don't think we have anybody new, but good morning, everybody, and good afternoon, wherever you are. I think we can go ahead and move on to the open discussion.
C: I would like to know: what's the process to add a build cluster to Prow, an external one? I know the tooling part, which is basically to generate a kubeconfig and add that as a secret, but from a policy perspective, are you allowed to basically do that?
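A minimal sketch of the tooling flow mentioned above, assuming the tool referred to is test-infra's gencred and that Prow reads build-cluster credentials from a kubeconfig secret; the flag and secret names here are illustrative, not confirmed in the discussion:

```
# 1) Generate a kubeconfig for the new external build cluster
#    (gencred flags are illustrative; check the tool's --help).
gencred --context my-eks-cluster --name my-build-cluster --output /tmp/kubeconfig.yaml

# 2) Add it as a secret in the Prow service cluster so jobs can be
#    scheduled to the new cluster (secret name illustrative).
kubectl --context prow-service-cluster \
  create secret generic kubeconfig --from-file=config=/tmp/kubeconfig.yaml
```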
C: It's an amazing question. We can follow up on Slack if you want.
E: Yeah, I think this is one of the things where I probably need to poke folks more directly on the Prow side, because I think they might have a better idea of what the policy is around that. But yeah, I think following up after sounds great.
E: Yeah, cool. Chao's not directly on that anymore, but he may also know; I think both he and Linus are good people to ask.
E: Yeah, if you have anything to link as well, feel free to send that to me and I can pass it on.
D: What's the plan to have mixed clusters for testing?
C: Not necessarily, because we are not going to production, by the way. We're basically trying to identify what the blockers are to move specific projects to EKS: things like storage exhaustion, and CPU and memory, those kinds of things. We are not saying, oh, we plug in an EKS cluster and we go to production right away.
D: We should have no dependency, and ideally we should be able to practice this from anywhere. What I'm saying is, if we're going to progress faster, we can start with periodics or with presubmits in the side projects, you know, and when we have some soak time we say, well, this is ready to go to kubernetes. But adding it to k/k as the first thing, I think that is very risky.
C: Yeah, I think the rollout strategy is not established, because we didn't do a full inventory of all the projects we want to migrate. But ultimately we need to plug in EKS, and that's my first goal here. I first want to plug EKS into the Prow control plane, and later I'm going to reach out to SIG Testing and others to basically identify which Prow jobs we want to migrate. We don't have to migrate everything, but we need to establish a list.
A: Yeah, thank you very much. Thank you, guys. And Patrick, we can circle back around to you. I did replace your second link because I think it was just a duplicate.
F: It is, it is. The first one actually got merged; the second one was a fix for it; and then the third one, the one that is currently pending, is in kubernetes. Let me just get you the link; the subject mentions golangci-lint. I can update the document later.
F: It would be stronger than what we have enforced for existing code, but that makes sense, because new code might have to meet a higher quality bar, and we do have additional tools that we just can't apply to the current code base, merely because there are so many things in it that just weren't found when the code was added. The current status is that we could merge this one pending Kubernetes PR, and then there is a shell script that users can run to check the code that they are currently writing in a pull request.
F: So this Kubernetes PR currently needs a reviewer; I think Antonio looked at it a while back, and it is now ready for merging. The next step, and that's where help would be needed, would be to make the user experience nicer. Currently, what happens is: say a developer or a reviewer remembers to start a job; then there is a failed Prow job.
F: The contributor needs to look into the text output of that job to figure out which code lines need to be updated, and that's all less convenient than in other repositories, where a GitHub action executes that and directly posts the result to the patch view in GitHub. We can't do that in kubernetes, or not easily, because we need to run this check in Prow.
F: A GitHub action doesn't work that well because it would interact poorly with the merge robot, and there is an issue open about supporting GitHub annotations from Prow, but it hasn't made much progress. So if someone wants to do some useful work here: there was a volunteer who signed up for it at KubeCon Europe last year, but he just hasn't had the time. So this is a test-infra issue; the last one, 17056, is up for someone to work on.
F: Okay, well, whatever the solution is. I think the latest design was that in Prow we just write some text files, those get archived as artifacts, and then some post-processing of the result (the same one that posts commit comments on the PR) would also look at these text files and update GitHub. I think that was the latest status on how we wanted to solve that; if there's now a different solution, that would of course be fine too.
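A minimal sketch of that design; $ARTIFACTS is set by Prow's pod utilities and everything written there is uploaded automatically, while the file name and the downstream processor are assumptions:

```
# In the verify job: write findings somewhere they will be archived.
golangci-lint run ./... > "${ARTIFACTS}/golangci-lint-findings.txt" || true

# A separate post-processor (the same component that posts commit comments
# on the PR) would read this artifact and turn it into GitHub annotations.
```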
B: Because if that's the case, that's a rough starter issue. Migrating Prow to a GitHub app is gonna require... no?
F: No, no, nothing on the running side at all, not at all. I was referring just to where we run golangci-lint. Currently it runs as part of pull-kubernetes-verify, and if we keep it like that, we would just produce additional artifacts from that verify run, and those results somehow need to get into GitHub annotations.
F: Well, I guess unless we have a volunteer to work on that, we'll just keep that issue open. We can move forward; I think we could move forward with this kubernetes pull request. I hope that's just a matter of approving it now, and then I can send an email to the kubernetes-dev mailing list announcing that this functionality is now available, people can try it out, and if it makes sense we can gradually run this job more often. Currently it's all opt-in.
B: So what we've done in other places is we've captured the output in the shell wrapper, and then, if it's for files that we know are not yet clean (they're either in an allow list or a deny list), we compute whether we want to fail or not. So, like, when we started enforcing shellcheck, it was not possible to get it to pass on the whole repo.
B: So we just had a list of known shellcheck-failing files, and then, if your file wasn't in the list of known not-yet-fixed files, then we failed, and not by matching the output.
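A minimal sketch of that allow-list pattern, with illustrative file names:

```
#!/usr/bin/env bash
# Fail only when a file that is NOT on the known-failures list fails the linter.
set -o errexit -o nounset -o pipefail

failed=0
for f in $(git ls-files '*.sh'); do
  if ! shellcheck "$f" >/dev/null 2>&1; then
    # Files already listed as known failures are tolerated for now.
    if ! grep -qxF "$f" known_shellcheck_failures.txt; then
      echo "new shellcheck failure: $f"
      failed=1
    fi
  fi
done
exit "$failed"
```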
B: You can do that per directory; I think we might want to consider something like that. It's cheaper to run golangci-lint once, and people will actually look at the verify output, and we get by pretty well without bot comments with that. And then what we do is we ask people to work on cutting down the exclusion list, okay? And new code won't be in the excluded list.
B: There's two things that happen, yeah: totally new files are not permitted to fail the new linters; existing files that are known to fail are permitted until they're removed from the exclusion list, and then we ask people to work on removing them from the exclusion list. Over time you don't have files on the exclusion list, and then new code in those files also can't fail, yeah.
F: So the current mechanism is more intelligent than that. You can't add new chunks: a new function added to an existing file must pass the linter checks, okay, and that works without configuration changes. It looks at the base hash and automatically figures out which code is new, and only reports issues in modified lines.
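golangci-lint does ship a mechanism like this as its --new-from-rev flag; a sketch of how it can be invoked, where the merge-base choice is an assumption about how the wrapper picks the base revision:

```
# Report only issues on lines changed since the base revision.
golangci-lint run --new-from-rev "$(git merge-base origin/master HEAD)" ./...
```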
F: The verify-golangci-lint.sh script has a -a parameter for automatic, and that looks at your current branch and reports issues. Or you can say -a -s, and it will check your local branch with the strict configuration, and that's the same thing that would happen in CI.
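Based on that description, local usage would look roughly like this (the flags are as stated above; other details of the script may differ):

```
# From a kubernetes/kubernetes checkout:
hack/verify-golangci-lint.sh -a      # automatic: check only what is new on your branch
hack/verify-golangci-lint.sh -a -s   # the same, with the strict configuration used in CI
```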
B: An exit strategy to where we don't need to run both of these, and we've just standardized on the new lints. I'm also, just with my SIG K8s Infra hat on, a little... I'm not excited to add another presubmit that has to independently run over all of the code.
B: We are trying to remove presubmits because they're expensive; they run on every push, yeah. And I'm not sure what we've allocated for this one, but typically golangci-lint, just with our huge amount of code: we're running that in verify with, like, a full CI node.
F: The same way that I just described: it would be a new verify script that runs golangci-lint again with a different configuration, but it would be in the same pull-kubernetes-verify, okay, and it would use the cached results. That's the other beauty of golangci-lint: it caches the previous analysis results, so the second run would be fairly fast.
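For illustration, golangci-lint's cache can be inspected and controlled like this; the cache subcommands exist in golangci-lint, and "strict.yaml" is an illustrative config name:

```
golangci-lint cache status                # show where the analysis cache lives
golangci-lint run ./...                   # first run populates the cache
golangci-lint run -c strict.yaml ./...    # a second run reuses cached analysis
golangci-lint cache clean                 # wipe the cache if results look stale
```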
B: I think we want to do that. Another option you have is, we can make it not default under the make verify command if we're not quite ready yet, but we can have CI run it there, and in the script logic we can make it optional until we're ready to make it required.
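A sketch of that kind of gating inside the script logic; the environment variable is hypothetical:

```
# Opt-in until we are ready to require it: CI sets STRICT_LINT=true,
# while a local `make verify` skips the check by default.
if [[ "${STRICT_LINT:-false}" != "true" ]]; then
  echo "skipping strict golangci-lint (set STRICT_LINT=true to opt in)"
  exit 0
fi
hack/verify-golangci-lint.sh -a -s
```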
E: Yeah, no, I don't know the point. The only thing I was thinking, in terms of reporting, is that I know Prow has the /lint command, which is obviously not the same as this, but I'm wondering if there's a way to reshape that, or maybe add some additional functionality that would fit in here. It needs to be triggered manually, but I think, because it is, like, a Prow command, it does report the results to a comment on the PR, yeah.
B: /lint is kind of a broken approach. The problem with /lint is it actually runs in the webhook process, for security reasons, so that we're, like, not worried about letting arbitrary code inside of a Prow job comment back with the official Prow account, which could be disastrous, right?
B: Actually, as is, if you had, like, new linters, I could sneak some code into a linter that made it do things. But the real problem is it's using the Go version that was built to compile the webhook, and not the Go version under test, and all the lint tools and things: you really want to control the version of your linter in the repo, so you can, like, roll forward and fix changes.
B: In that case, it has to, like, clone the code and then run golint over it in process, and most projects are using golangci-lint instead, anyhow.
B: And we can have, like, a secondary account or something do this. I think we should probably make that, like, an enhancement to linting that doesn't block enabling ratcheting the lint, and we've done that before. It sounds like we have a couple of options for that, cool. I think we've actually had relatively good experience even with the pretty dumb approach, where we're just, like, grepping against the output versus, like, a permission list; that's the biggest problem we've run into.

B: This sounds reasonable enough to move into verify as soon as we can confirm that it's working, honestly.
F: That's the feedback part, yeah. But I posted some examples in that open pull request, and I have another pull request opened that intentionally will fail the linting. So we can use those as test cases, and we can wait for other people to try it out and give us feedback.
B: I guess the other question becomes: so you're enabling a new set of linters now against new code, and I haven't dug into how you configure this. Will this work if, say, golangci-lint adds another linter and I want to add that one? Do we need a third target, or can we add it to this one and that one? Like, there's different sets of... is it new code?
F: A new linter would need to be added to that config file, and it would have to pass all new code immediately.
F: We can't; otherwise we can't add it there. If we get into the situation that we want to roll out a new linter, we basically would need to first fix existing code, I suppose, and then add it, and then say we are confident enough that this linter works, but we enforce it immediately. I don't think there is any kind of gradation where we can make it optional, not without another make target or a...
B: Yeah, but who's going to change that? Because having the list of "we know all these things are failing", then you just tell people, like, cut down the list and we get there. I'm not saying we need to do that, but I'm not sure what's the forcing function, like the equivalent forcing function here, to stop permitting it in existing files.
B: I would suggest: I think it may be easier to get people to cut down a list of exceptions, even if it's large initially, because then you're driving it towards zero, versus, like, an ever-expanding set of required files, and because also then you have to convince people to add that. Whereas the other thing that we did, that I'm recalling now: we could do the inverse of this.
B: But I think the missing part that I forgot to mention is: in the linters where we've had the exception file, if we discover that a file is now passing, we require you to remove it from the exception file, and we tell approvers you shouldn't be permitting adding things to the exception file; it would have to be pretty unusual circumstances to permit that.
B: So every time you file a PR: if you, like, delete some code, and that was the offending code, and now that file is passing, the lint script notices that that file no longer has any failures but it's in the known-failing list, and you have to remove it from the known-failing list. So new failures can't be introduced.
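A sketch of the cleanup half of that ratchet, complementing the allow-list check above; exceptions.txt stands in for the known-failing list:

```
# If a listed file now passes the linter, force its removal from the list.
while read -r f; do
  if shellcheck "$f" >/dev/null 2>&1; then
    echo "$f now passes; remove it from exceptions.txt"
    exit 1
  fi
done < exceptions.txt
```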
E: Basically, like, a long-term thing that does check for exclusions (the exclusions of the stricter linters) in the entire code base, and then, like, also reports...
F: As we update code, or as people submit PRs to fix lint issues, we will come to a state where some things automatically will not have those issues anymore, and we just need to tell people that, yeah, we are taking this seriously and we are welcoming patches to fix these issues, and then people will start working on it, I suppose.
B: I guess the exit state, then, is: since we're not permitting them in diffs, and we would ask people to go make diffs to fix it, at some point we notice that if you ran the tool there is no old code with it, and we just add it to the default job. Exactly, yeah. That sounds a little trickier to ensure happens, but if we write up an issue outlining how to do it, I bet we can get there, and, like, do one for each of the linters.
E: This sounds like good, like, initial contributions too, so hopefully that...
F: The main problem that I've had in the past with wanting people to do things is that we had no mechanism to enforce it for new code. So we have had some open issues against the e2e testing where I said, this is a bad pattern, let's remove it, and then people start fixing something, and then new people, not aware of this guideline, added it in other tests. So it was another never-ending battle against updating code.
B: I think we have a good approach here. Let's test drive it, make sure it's working, and then let's get this running in verify.
A: Yeah, thank you, Patrick. Okay, I think, Patrick, you may also still need a volunteer for one of those issues, if that is something that we still want to pursue. But that looks like it's the end of our discussion. If anybody has anything else, please speak up or hold your peace, and I'm gonna go eat lunch.