From YouTube: Kubernetes SIG Testing 20180626
A
Can you open the chat? ...You'd think at some point Zoom becomes second nature. Okay, meeting notes are in the chat. There are two topics up. The first one is a review from the breakout we had last week on changes that we want to make to Prow, or things we're going to focus on as we're making improvements in the next nebulous time period. Just super quickly.
Some of the things that we talked about were: we need to make sure we document the integration points for test authors, so that it's clear at what level they need to engage with us, whether, you know, they're extending kubetest, whether they're just adding another configuration line, whether they actually need their own build cluster, whatever it is. We wanted to improve startup, so that there's a template or a deployment or some button that will push, you know, a full deployment out, and potentially also automate the webhook configuration, so that there's a one-button way to start using Prow if you want your own cluster, because right now it's kind of rocky. We noted that there's tech debt on the controller and CRD side of things (there are still some handcrafted clients that could be aligned with some of the API machinery generators), and there's also tech debt on the hook side of things.
We want more structure for commands, so it's easier to write more commands. Ben had mentioned that if we could provide some sort of caching solution for jobs, that would make Prow a lot more well-rounded. We mentioned that there are a couple of gaps in the documentation, and we especially want to focus on making sure we have documentation for all three of our main audiences: administrators that are running a cluster, editors that are, you know, changing job configuration, and users that are reading test output. Some sort of monitoring for the infra was mentioned quickly.
That would help maintenance. And: splitting out the deployment and configuration from the code. That's actually the second topic that I put on there. You know, today there have been a couple of conversations about: can we split out Prow, the project and the code base, from the Google-specific configuration stanzas and deployment details? The floor is kind of open for that. Like, do we want Prow in its own repo? What are the pros and cons? How do you all feel about that?
B
I think we do, long term, but it's still kind of related to some of the other test-infra tools right now. So I think, as a starting point, I've been pushing for kind of just moving the configs to a directory in the repo for all the Google stuff, and then, once we know that's pretty stable, it might be more feasible to actually take Prow out of that repo.
But test-infra has been a monorepo until now, so I don't know that just, like, ripping it out is going to go super well. I'm also not sure yet where it should go, and, like, how much of what things should move out. I mean, right now, Prow on its own, you still kind of want something like Gubernator alongside it.
G
I would think that, you know, decoupling the Kubernetes deployment of Prow from the Prow source code can happen without it leaving the repo. I mean, I feel like that's the more important bit. And I feel like the fact that things are conflated right now, where we kind of have the config test, which is somewhat testing the mechanics of the configuration package but also testing the specifics of the jobs we have in the kubernetes org...
Decoupling those things is definitely a good thing to do, and that's the most important piece. And then, once we have all that done, it shouldn't be a fundamental issue to vendor, you know, Prow back into test-infra. But I mean, the other thing is, like, what pieces are like that? Other pieces, like label_sync, and, you know... I think most things in test-infra are.
B
I think one of the main things would be the config tests. We should split those over into that config directory and figure out which tests make sense for just Prow and which tests make sense for our configs specifically, and then, once we have that, we're going to need to import the Prow, like, config loader and agent and things like that. Also, I don't think we need to do, like, staging or anything, like kubernetes does. If we do decide to move Prow to another repo someday, we'll just update the import paths and vendor it.
A
The one thing that we'd also win, on top of that, if we were to move the code at a future date: as somebody who's not interested in anything other than Prow in that test-infra repo, there's a lot of noise that comes through, and filtering through to, like, the notifications that are actually useful is sometimes tricky. Sure.
B
Yeah, there's a repo... there's a directory at the top level, it's config. We should move all the tests, and anything, just anything in general that's not reusable, that is, anything Kubernetes-specific. Some of the deployment logic might be okay to stay, as long as it has things like: you can override which repo you're pointing at. Yeah, I think the actual deployment scripts are mostly reusable right now; you just set a couple of env vars for, like, which cluster you're using and things, but some of it should probably also move out.
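(As a minimal sketch of that flow; the variable names and script below are purely hypothetical, for illustration only, not the actual test-infra scripts:)

    # hypothetical names, for illustration only
    export PROW_CLUSTER=my-build-cluster   # which cluster the deployment targets
    export PROW_REPO=myorg/myrepo          # which repo the webhooks point at
    ./deploy.sh                            # push the Prow components to the cluster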
F
We have three types of tests for config right now, and we kind of need to clear those up, though, because there are tests of the config logic itself; then there's general testing of, like, a config.yaml, to make sure that's valid; and then we have Kubernetes-specific config tests. Right. What I'm saying is, we shouldn't just, like, rip them out.
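(As a rough sketch of the middle kind, a generic validity test for a config.yaml that is independent of Kubernetes-specific job expectations, something like the following Go test would do; the file name, package, and use of gopkg.in/yaml.v2 are illustrative assumptions:)

    package config_test

    import (
        "io/ioutil"
        "testing"

        yaml "gopkg.in/yaml.v2"
    )

    // TestConfigYAMLIsValid only checks that config.yaml parses as YAML and is
    // non-empty; Kubernetes-specific job expectations would live in a separate test.
    func TestConfigYAMLIsValid(t *testing.T) {
        raw, err := ioutil.ReadFile("config.yaml")
        if err != nil {
            t.Fatalf("reading config.yaml: %v", err)
        }
        var cfg map[string]interface{}
        if err := yaml.Unmarshal(raw, &cfg); err != nil {
            t.Fatalf("config.yaml is not valid YAML: %v", err)
        }
        if len(cfg) == 0 {
            t.Fatal("config.yaml parsed to an empty document")
        }
    }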
G
Most fejta-bot complaints are actually going to be symptoms of other, actual problems. Like, if there's a bunch of stale PRs or issues that people aren't reviewing, it'll wind up, you know, fejta-bot will leave comments saying "hey, this thing is stale", and if you create hundreds of issues that you don't ever pay attention to, that will then, you know, make you annoyed. And likewise, if you have a PR that fails tests, it'll potentially, you know, every four hours or something, say "retest, please".
D
So the two complaints specifically are around closing inactive issues: basically, the multiple times we ping it, as it goes stale, to rotten, to closed, or someone will /reopen or /remove-lifecycle stale. Every time we do one of those transitions, we send a notification out, which emails people, and they're (a) not a big fan of closing the issues, and (b) not a big fan of all the notifications. We could just silently update the labels, but not necessarily post messages.
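(For context, the transitions described here are the Prow lifecycle labels, and the reversals are GitHub comment commands handled by the bot; the ones mentioned are:)

    /remove-lifecycle stale   # clear the stale label after new activity
    /reopen                   # reopen an issue the bot closed
    /close                    # close an issue that has gone rotten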
A
It runs fairly frequently, but it's using the GitHub last-updated API, so any given issue will only be updated, you know, 30, 60, 90 days after activity, but only if no one commented. Taken together, if there's a lot of issues in the queue, any one person may get a notification from the bot, you know, every day. Yeah.
B
Yeah, but I mean, it also may be an issue for ContribEx, in that it's probably, like... I don't know if it's really our business, but I think it's pretty unhealthy to have, like, eight hundred some-odd issues that are just going stale and not being closed, because just not addressing them isn't helpful either. And having them there... there needs to be a way to filter through and find the ones that people are actually trying to work on.
G
It'd be better if there was a breakdown of that between stale issues and retesting, right? Because fejta-bot is essentially the random automaton that has no special rights to the repo, that will leave comments on things that make things happen. And so, yeah, a high fraction of PRs will actually, you know, experience CI flakiness that fejta-bot then resolves by leaving the /retest comment, in general.
They can sort of have confidence that if it passes testing and is approved, you know, it will get merged. And then, yeah, with the stale issue thing, you know, I don't necessarily... I feel like a lot of people will wind up, kind of, you know, CC'ing Brian on issues, and so I don't know that there is... I feel like, in his case in particular...
B
One idea: I think one of the main reasons to close issues, other than just knowing that they're not being worked on (which is already somewhat apparent when no one has interacted with them), is having a way to find issues that are being worked on, or vice versa, via, like, GitHub search. We could do the opposite and put a positive label on an issue that is being worked on, yeah, silently, and then you could filter for issues that are actually active.
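(For reference, that positive-label approach turns the filtering into a plain GitHub search; assuming a label such as lifecycle/active for "being worked on", the queries would look like:)

    is:issue is:open label:lifecycle/active
    is:issue is:open -label:lifecycle/stale -label:lifecycle/rotten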
A
If it's clear to contributors that, you know, each part of the larger ecosystem can look at this and say, you know: is it valuable to us to close issues that haven't been touched in half a year? If that's not valuable, yeah, they can choose not to run the bot. But yeah, silently closing stuff, or trying to get around notifications, kind of gets rid of the point of the robot, right?
F
Unfortunately, I guess I don't know that I have many specific comments. The main thing, I guess, would be that for most of our processes that we implement with a notification telling you what you need to do, we try to remove the notification from the PR once you've met all the requirements, so once you've satisfied the process. Essentially we remove all the comments. But with the milestone munger, we still have a comment that says "you're up to date", like, everything is fine.
B
Well, I'm really not seeing people respond to that and actually fix anything, and if anything, for a lot of people, myself included, you wind up just ignoring the thread, because instead of seeing actual activity you're just seeing the milestone bot. And that's very different from, say, fejta-bot, because it's actually every single day, including, like, weekends and stuff, so you'll wind up subscribed to a thread and come back, and it's like five comments.
I also think it's helpful to have some comment telling you that, like, you're not meeting the process, but I don't know that, like, being continually notified of it by GitHub comment spam actually accomplishes anything. And I know the release team has said, in effect, that the current reply is that, well, we would need something else. I think the question is... I don't know, I don't think anyone has a better idea. I think there has to be something better than "we'll just spam everyone every day and hopefully they'll do something."
I think most of the people on the issues don't actually have enough power to do these things, and aren't very familiar with stuff, or they would have already gotten all the labels after, like, maybe the first comment. And I think... I don't know what the solution is, but I think we need a way to raise the visibility to people that actually can fix stuff, like SIG leads, who are actually maintainers.
A
Sync with SIG Release and determine, like, not what their preferred solution is, but just specifically the parameters of what they're trying to solve here, and then... I think that would be a good place to start: how do we fix this daily spam, especially in the context of these stale mungers, and that should probably stop. I think so. Right, yeah.
H
Would it be interesting, during that time, to maybe have in Prow, like, a milestone-requirements section for each job, like, when we view them, for example? Like, would that be something that interests people? That way maybe we can avoid GitHub spam and instead have, you know, "hey, this PR is failing for these reasons". I mean, I'm not, I'm not super familiar with the problem.
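(A purely hypothetical sketch of that idea; no such stanza existed in Prow's job config at the time, and the field names below are invented for illustration:)

    presubmits:
      kubernetes/kubernetes:
      - name: pull-kubernetes-e2e
        # hypothetical field: declare milestone requirements on the job itself,
        # so failures can be reported in one place instead of via comment spam
        milestone_requirements:
          required_labels:
          - priority/critical-urgent
          - kind/bug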
A
Cool, yeah, definitely helpful. Like, yes, some sort of network, or some way to help people look at the things that are related to what they're looking at, that was the idea. But yeah, talking to SIG Release is good, I guess, for both; contributors can then relay back what they want, to hopefully reduce noise here.