From YouTube: Kubernetes SIG Testing - 2019-07-09
A: Okay, so hi everybody. Today is Tuesday, July 9th. I am Aaron Crickenberger. You are at the Kubernetes SIG Testing weekly meeting. I will paste the agenda doc and attendance link in chat again, if anybody wants to, sort of, like, mark their attendance or whatever. So, I wanted to kick us off with a discussion: I wanted us to talk this week a little bit about planning around Prow. The first thing I wanted us to talk about was, like, what it would take to split Prow into its own subproject.
A: I'm pasting the issue there in chat. I feel like the first step would be to break the config up out of the Prow codebase. And maybe, Steve, since you're on the line, and Daniel: maybe you two can share sort of what your experience has been, if you have the config embedded with the codebase in a fork, or if you're running with the config separate, and what sorts of things we might want to watch out for.
D: For us, I think we have it in the same repo. We have a config updater that just updates the config as it's running, as well. Nothing special. I think there's the whole problem with having the config anywhere.

A: Okay.
A: Yeah, I think it will mostly handle moves, though. Reconfiguring the config updater, that's about config.yaml and plugins.yaml. There are other things in this checklist. When you talk about remaining config files, what are those? It's just config.yaml, yeah, and plugins.yaml. Okay! Oh yes.
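For context, the config-updater mapping being discussed lives in Prow's plugins.yaml. A minimal sketch of what re-pointing it during a move might look like; the new paths below are illustrative, not the paths that were actually chosen:

```yaml
# config_updater maps repo file paths to in-cluster ConfigMaps.
# During a move, mapping both the old and the new path (illustrative
# names below) lets updates keep landing until the cutover finishes.
config_updater:
  maps:
    prow/config.yaml:          # old location
      name: config
    config/prow/config.yaml:   # new location after the split
      name: config
    config/prow/plugins.yaml:
      name: plugins
```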
C: The process described here was, like: start listening in the new place for updates, move the files, and then stop listening in the old place. And so, like, that's the process... that doesn't work, right? So we'd have to do that cutover pretty quickly. Okay, yeah. And then the rest of it: I think there's a lot of docs and whatnot that point to those files, so we're just gonna have to run through it and make sure we update all those.
A
So
I'm
super
interested
in
getting
the
first
three
done
pretty
quickly
or
soonish
I'm
interested
in
helping
out
with
these
the.
How
do
we
feel
about
moving
proud
cluster
to
config
cluster
I
feel
like
that?
My
guess
is,
that
might
be
a
little
more
tangled
up
with
things
like
tackle,
assuming
where
the
cluster
fire
files
are
and
stuff
as
well,
but
or
like
canonical
examples
of
what
good
cluster
files
would
be,
but
I
feel
like
this
would
be
the
next
logical
step.
A: There are things that are involved in our deployment of Prow, so I think about, like, the fact that ghproxy lives outside of the prow directory but is actively used in most deployments at scale, and then the fact that there are many things inside of the prow directory. There are some things in the prow directory, like its GitHub client, that are used elsewhere. I don't know if that is as much of a problem; I just think the imports will handle that, won't they?
A: That's very true. I mean, like, ghproxy is not included in the starter deployment as far as I know. Like, I used tackle to set up an instance of Prow, and I didn't get ghproxy for free with all of that. So if I want to add ghproxy, I'm gonna have to manually create the cluster files that hook it back into my thing, and that's fine. I just feel like Red Hat and Google have experienced that Prow is not very, very useful to us without ghproxy and all.
C: I think the larger question: it could also just be, like, if we're moving this into a separate repo, you know, part of it is to, like, make the actual development process easier, to, like, co-locate things that are related. So, I don't know. On that semantic argument of, like, whether or not it's technically part of Prow: I'm okay with it moving into the repo even if it's not.
A: Going through everything that would be involved in migrating Prow to its own subproject would, like, kick up a bunch of tech debt. Is this something that we want to commit to this quarter? Or do you feel like committing to splitting up the codebase from the config is enough of a start? Because that way, like, if we wanted to point people at how to get started, they can just kind of get started with a config, and they don't have to fork our repo or whatever. Yeah.
B: I think we haven't sufficiently explored the idea of moving the config into its own repo. Like, you know, I feel like there's kind of tier 1, tier 2, and maybe tier 3, right? Like, the config definitely should not be part of the Prow development notifications; something like ghproxy, on the other hand, probably maybe should; obviously, something like plank definitely should be part of the Prow notifications; and something like kubetest, you know, probably should live somewhere else. But it might actually be easier to move things like kubetest and the config out into their own repo, and sort of just leave all of the other stuff, you know, in the same repo. Like, maybe not, maybe so. But I feel like we've primarily only looked at the idea of moving Prow out, and it might make sense to, like, move everything else aside from Prow out, and keep Prow where it is today.
A: Yeah, so I think maybe what I would suggest we commit to is at least moving the config into its own directory, and then maybe consider moving kubetest to its own repo as sort of a bonus. So, when I start thinking about Prow, the config changes that you don't care about if you're just working on Prow include things like testgrid config changes. And I feel like you would probably want to talk about moving testgrid config changes into a separate repo, and then we'd want to start talking about how we're linting all of that, making sure that it agrees with how Prow does stuff. I feel like moving the config into its own repo, that might also be more of a scope creep, but maybe a more reasonable scope creep than moving Prow.
B: Moving the config outside of the place where the binary is, is a useful thing to try and commit to this quarter. Like, as we are doing an increasing amount of testgrid development in the testgrid folder, I think it will make increasingly less sense to have the testgrid configuration live alongside that, for the same reason that it doesn't make sense to have the Prow codebase live alongside the Prow Kubernetes config. And so, yeah, like, I know Michelle and Shawn were actually sort of talking about the fact that that's confusing; they were thinking about doing that today. So I think there will be general interest in moving all the config into its own location, and I think its own directory is a good first start. Whether that becomes separate repos, to get better GitHub notifications, I think we can design as we go. Okay.
A: I think I captured something here. I will make you all watch me type up GitHub issues for this, if there aren't GitHub issues that exist already for this. So, the second thing I wanted to talk about was Prow epics; that's maybe some good fodder for ideas of what we want to commit to this quarter. So, how do I pronounce your name?
A: Okay, so I went ahead and created a tracking issue for this. I am under the impression this is something we are actively pushing to iterate on this quarter. It's unclear to me what is feasible to land, and it feels like we are currently in the process of hashing that out, so I just made a tracking issue and dropped it in our milestone, with links to the KEP that you have written, as well as sort of the sketch PR.
E: We weren't completely in agreement with regards to how much complexity this will introduce, and whether people are fine with that; that is the business case. The PoC was created because my opinion is that there's too little benefit for the amount of complexity. Cole didn't completely agree on that, and that was sort of a demonstration. And yeah, the next step is to, like, have someone go over things. And yeah.
A: Well, so what I was thinking is, like, we can drop this into the current milestone, to show that, like, this is something we're looking at this quarter. That doesn't necessarily mean we're guaranteed to close out literally everything in our milestone; historically, we haven't. But at least, like, if I have to tell people, generally speaking, what we're working on this quarter, I can just point them at this milestone.
A: Okay, the next thing: I didn't get as far as making a tracking issue for this, but it's enabling open management of the prow.k8s.io deployment. Oh, so, I think splitting up the config from the binaries will sort of help with that. I initially had approached this thinking that, as a prerequisite, Prow needed to live over on CNCF-owned infrastructure, which I'm trying to push toward anyway, but I think regardless...
B: So, I think that, yeah, I think it would be good to sort of, you know, clarify. I think internally we've done a little bit of iterating on what the, you know, requirements are, and sort of documenting how we respond to things and whatnot. I would say that we should definitely figure out the way that we can, you know, create an on-call schedule. Yeah, like, we don't really have one, yeah. So I think both those things make sense.
A: The way I'm sort of phrasing this to myself is: I want to enable more community members than Steve to go on call for Prow. Because, like, I trust that Steve has been deep inside of the Prow codebase for a long time and generally has a good sense of how to read the tea leaves of Prow operationally, given that he runs his own Prow instance. But there are no real documents that describe how to do that, and what the expectations are, and all of that. And so I think we could...
C: You know, there are a lot of pieces to this self-service stuff; like, we've already committed to some of them. I think, you know, really good work is being done on that, for triggering stuff from the web, for instance. But there are a lot of tiny pieces to this, and I think we should probably at some point choose which ones to commit to. They're not all necessarily intertangled, and so it's kind of à la carte, but...
C: That being said, a couple of these do sort of rely on... like, if you want to provide people changing the configuration files with feedback on whether their changes are going to work in production, there's not really a way to do that with high fidelity without actually running, you know, the jobs. And the security stance of prow.k8s.io is that people checking in job config can't be trusted to check in job config, for security reasons.
A: Purely because there was a delta between... I know how to use this tool that Erick created, called phaino, which, like, given a Prow job, can synthesize the docker command to run a container. But that doesn't accurately simulate the behavior once you run it on a Prow instance that is configured to inject the pod utilities, and so the parts of the pod utilities that, like, munge the GOPATH, kind of...
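To illustrate the delta being described: the core of what a tool like phaino does is translate a job's container spec into a local docker invocation. The sketch below is hypothetical (the field handling is made up, not phaino's actual code), and it deliberately omits the pod-utilities/GOPATH injection a deployed Prow instance adds, which is exactly the behavior it can't simulate:

```python
# Hypothetical sketch: turn a Prow-job-like container spec into a local
# `docker run` command. Field names are illustrative, not Prow's schema,
# and no pod utilities (clonerefs, entrypoint, etc.) are injected.
def docker_command(job):
    container = job["spec"]["containers"][0]
    cmd = ["docker", "run", "--rm"]
    for env in container.get("env", []):
        cmd += ["-e", f"{env['name']}={env['value']}"]
    cmd.append(container["image"])
    cmd += container.get("command", [])
    cmd += container.get("args", [])
    return cmd

job = {
    "spec": {
        "containers": [{
            "image": "golang:1.12",
            "command": ["go"],
            "args": ["test", "./..."],
            "env": [{"name": "GO111MODULE", "value": "on"}],
        }]
    }
}
# prints: docker run --rm -e GO111MODULE=on golang:1.12 go test ./...
print(" ".join(docker_command(job)))
```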
A: So, I don't have great thoughts on how to commit to fixing any of these things, but I do think I'm gonna be kicking in a couple of things that enable better self-service of config changes. And my fervent hope would be that we could use, like, linting or something against config changes, to sort of use presubmits to enforce that people can or cannot do certain config changes. By sandboxing things up more, we would empower people to iterate on their jobs, or, like, on credentials or secret stuff.
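As a sketch of the kind of presubmit lint being wished for here: a check that flags job configs that mount secrets without being pinned to a trusted cluster. The field names and the cluster policy are hypothetical, not the real prow.k8s.io rules:

```python
# Hypothetical config lint a presubmit could run: flag jobs that mount
# secrets but are not scheduled onto a trusted cluster. The "volumes"
# and "cluster" fields and the trusted-cluster name are illustrative.
def lint_jobs(jobs, trusted_clusters=("test-infra-trusted",)):
    errors = []
    for job in jobs:
        mounts_secret = any("secret" in vol for vol in job.get("volumes", []))
        if mounts_secret and job.get("cluster") not in trusted_clusters:
            errors.append(
                f"{job['name']}: mounts a secret but runs on "
                f"untrusted cluster {job.get('cluster', 'default')!r}"
            )
    return errors

jobs = [
    {"name": "pull-unit", "cluster": "default", "volumes": []},
    {"name": "push-deploy", "cluster": "default",
     "volumes": [{"secret": {"secretName": "deploy-key"}}]},
]
print(lint_jobs(jobs))
```

Running such a check as a presubmit would give config authors fast feedback, though, as discussed below, it does not by itself close the ok-to-test hole.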
A: Yes, and that's the sort of thing where it's not clear to me... I think once you get into how tightly you want to mandate conventions around jobs, that's maybe more of a project-specific thing. Like, I think maybe the way that OpenShift trusts... like, OpenShift, I believe, trusts their developers to use some kind of staging cluster or something. Whereas, as a large open-source project, we can't necessarily extend that same level of trust to the audience that we serve. Yeah.
C: I mean, I think, like, yeah, we need to do some serious thinking about that specific statement, because there's about a million ways to poke a hole in the current security posture, and it's unclear to me if, like, linting presubmit stuff doesn't just reduce to the same thing, right? Because right now, if I made a PR against k/k and got an ok-to-test label, I could change the scripts that are being run, and I know the entry points that the jobs are using. So it's kind of a moot point.
A
I
agree
with
most
of
that
except
I.
Don't
think
this
is
a
thing
that
the
steering
committee
needs
to
be
involved
in
per
se,
I'm,
going
to
look
more
towards
like
the
project
security
team.
Who
cares
about
the
security
of
the
product
and
whoever
owns
the
bug
bounty
program
when
that's
put
together,
but.
A
A
So,
let's,
let's
make
me
consider
having
more
of
a
discussion
around
some
self-service
items.
I'm
gonna
kick
in
a
couple
things
into
the
milestone
that
maybe
word
around
this
and
you
can
make
a
noodle.
A
A
Talk
about
this
next
week
that
sounds
fair
to
people,
I
thought
on
best
practices
for
CR
DS,
an
API
machinery
is
maybe
to
wait
a
little
bit
I
feel
like
our
Sagara.
Texture
is
currently
discussing
what
even
those
best
practices
are
and
what
level
of
review
is
required
for
things
like
CR
DS,
which
may
impact
what
direction
we
want
to
have.
Potentially.
B: I would also say that maybe we could, you know, focus it around... so, for example, you know, like, right now Tide is fairly slow at recognizing when a PR has the right status context. Or, I mean, that's a bad example, but there are various things that are kind of slow, 'cause we use polling right now, and, you know, yeah. So, I mean, we could just focus in on making Prow react faster, and the most efficient way to do that is to adopt good practices for CRDs. Or is this something else; is that a subset of it?