From YouTube: Kubernetes SIG Testing 2019-04-16
A
I will go ahead and kick us off, and the next presenter will be showing up eventually to talk about his stuff. So, for a cut point, if I actually do edit this: hi everybody, today is Tuesday, April 16th. I am Aaron of SIG Beard, and you are at the Kubernetes SIG Testing weekly meeting. You're all being publicly recorded, and you can watch yourselves on YouTube later, and you adhere to the Kubernetes code of conduct by not being jerks. We have two things on today's agenda.
B
Hey, so I've put some updates in the notes themselves. We have a lot of job configs that are handwritten, or mostly handwritten. We have two sets of generated jobs, and during release time, for some of those jobs, we actually create new job configs for the new release whenever we cut a new release branch. This amounts to about 100-plus files which, during the release timeframe, need reviewing, and...
B
Generating the pull request for all of them is very cumbersome. So about two years ago we decided to automate this, and we've recently worked on a plan to make sure we are able to automate it. In the past we decided to use rotation, so to say: okay, let's pick up something from 1.13 and make it the job config for 1.14. That was the task for automating this, but in the last release we learned that we don't want to do that. We want to do...
B
We want to get the 1.14 configs from the master branch. The strategy we were using earlier was to use regular expressions and replace strings one-for-one, as safely as possible. Now we can't do that; we need to identify which configs are for master, and master configs come in two different formats. One is where you explicitly define "this is for master", and the other says that the job applies by default to every branch except for these release branches - those are the ones which have skip_branches.
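(A minimal sketch of the branch-matching idea described above. The `Branches` and `SkipBranches` fields mirror Prow's brancher config, but the exact struct and helper here are illustrative, not test-infra's actual generator code.)

```go
package main

import "fmt"

// brancher mirrors the two ways a job config can express which branches
// it targets: an explicit allow-list, or "every branch except these"
// via skip_branches. Field names are assumptions for illustration.
type brancher struct {
	Branches     []string `json:"branches,omitempty"`
	SkipBranches []string `json:"skip_branches,omitempty"`
}

// targetsMaster reports whether a job applies to the master branch:
// either master is listed explicitly, or no allow-list is given and
// master is not skipped.
func (b brancher) targetsMaster() bool {
	if len(b.Branches) > 0 {
		for _, br := range b.Branches {
			if br == "master" {
				return true
			}
		}
		return false
	}
	for _, br := range b.SkipBranches {
		if br == "master" {
			return false
		}
	}
	return true
}

func main() {
	explicit := brancher{Branches: []string{"master"}}
	skipping := brancher{SkipBranches: []string{"release-1.13", "release-1.14"}}
	fmt.Println(explicit.targetsMaster(), skipping.targetsMaster()) // true true
}
```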
B
So when you have a very detailed pattern you're trying to recognize, regular expressions are not the best tool. You probably have to load these configs, figure out which of them are for master, and generate new ones. The problem with doing that is, if you just use Go, Go doesn't have great YAML support, so when you load and dump the files you have issues reproducing the same YAML. These were handwritten, so they have comments.
B
They have structure, so that cannot be reproduced. If you use Go with go-yaml and try to load the configs, you face that problem. The other thing the doc also describes is how we can template this: how do we determine which configs need to be pulled automatically, how do we determine where the master config is, and what do we do when there is no master config?
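(To make that round-trip problem concrete, here is a minimal sketch using gopkg.in/yaml.v2; the job snippet is made up. Unmarshalling into a generic map and re-marshalling drops the comments and the author's formatting, which is exactly the issue with regenerating handwritten files this way.)

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// A handwritten config with a comment that carries real context.
const handwritten = `periodics:
  # Skipping release-1.13 because the feature is alpha there.
  - name: ci-kubernetes-e2e-example
    interval: 1h
    skip_branches:
      - release-1.13
`

func main() {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal([]byte(handwritten), &cfg); err != nil {
		panic(err)
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	// The comment (and the original formatting) is gone in the output.
	fmt.Println(string(out))
}
```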
B
I think for many of the jobs, the jobs are different and each job's owners put some context into comments. So if they add a new option that is beta or alpha, sometimes those comments carry that context. Sometimes the comments are trying to explain things like skip_branches - they had a comment that said "skipping these branches because of this." And I remember in 1.11 and 1.12 we actually had bad configs.
B
Those configs were fixed in the 1.13/1.14 release timeframe, and all of those issues came out when we were doing this particular task: we looked at the comments, and I spoke to a couple of people during that timeframe to say, hey, these configs are not doing skip_branches properly. So the comments help carry context. If the configs are handwritten YAML, you get the benefit of being able to add comments, I think.
C
We run a presubmit on our config repo that just loads all the config and then dumps it using the YAML library; if there's any diff, the developer just regenerates, and that basically solves all of this. We've been running like this for about a year, and there were a lot of worries at the beginning that not having comments and not having manual structure would make things harder to read, but a year in, we literally don't have any issues with it.
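(A rough sketch of that kind of presubmit check - load, re-dump, and fail if the checked-in file differs. The file path and the use of sigs.k8s.io/yaml are assumptions for illustration, not the actual check being described.)

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// checkRoundTrip fails if the file on disk is not already in the
// canonical form produced by loading and re-dumping it, which is how a
// "regenerate if there's any diff" presubmit can be enforced.
func checkRoundTrip(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var cfg map[string]interface{}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return err
	}
	canonical, err := yaml.Marshal(cfg)
	if err != nil {
		return err
	}
	if !bytes.Equal(raw, canonical) {
		return fmt.Errorf("%s is not in canonical form; please regenerate", path)
	}
	return nil
}

func main() {
	if err := checkRoundTrip("config/jobs/example.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```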
B
This is the thing: if you reproduce that comment - you know, if you use it as a template and regenerate the new version - the "release-1.14" string in it is wrong and needs to become "release-1.15". This could also apply to flag names, which are not easily identifiable. So if we have comments, it's very difficult to reproduce them. One of the ways you could solve this is to create templates and have a generator, or a YAML that we depend on, leave the handwritten files as-is, and then generate from them.
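(One way to read that template idea, sketched with Go's text/template; the template text and the Suffix/Version fields are hypothetical, not the proposal itself.)

```go
package main

import (
	"os"
	"text/template"
)

// jobTemplate is a hypothetical per-release job template: the release
// version is a parameter instead of a hard-coded "release-1.14" string,
// so regenerating for 1.15 only means changing the input data.
const jobTemplate = `presubmits:
  - name: pull-kubernetes-e2e-{{ .Suffix }}
    branches:
      - release-{{ .Version }}
    # Generated from a template; do not edit by hand.
`

func main() {
	t := template.Must(template.New("job").Parse(jobTemplate))
	data := struct {
		Suffix  string
		Version string
	}{Suffix: "gce", Version: "1.15"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```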
A
So I'm just trying to figure out how much of this is worth talking about here, because I feel like there's a lot of hashing out to do in a proposal document that is yet to be written, about alternatives considered and what you're thinking of doing. At a high level, though, there's this pattern: we have this really big, easily machine-parsable, dumb config that is our Prow jobs, which maps really well to the pod spec and so on, and then there's...
A
...a much more meaningful, domain-specific language that you would like to write to describe how you'd like to generate a bunch of jobs from a template. Do we check in generated files, or are those generated files magically created on the fly? Today we live with a pattern where generated files are checked in. Are we okay continuing that pattern, I mean?
A
Another question I guess I would have for this group is: do you feel really strongly that Go has to be used to do this? Do you win a lot by using, like...
B
Yep, and one piece of feedback I received multiple times, on reviews and on the design doc for the previous design and implementation we had, was to use Go, and that's why I brought this up, saying okay, cool, but it has these issues. Python is another option. I have not experienced a failure reproducing the handwritten YAMLs exactly, going by the documentation of the YAML library that Python uses to generate the generated jobs, plus a check-and-fix script like the one you just mentioned.
C
So yeah, background, I guess: I think we have a lot of big ideas floating around as to directions that the Prow project can move, and, you know, maybe feature sets or improvements that developers are asking of us. I've put together a doc where I put down some of the things that I'm excited about personally. There is a tiny little template in there; I would love for people to put in the things that they're excited about.
C
Like, the sort of target for these items is things that might take a couple of people a couple of months - so big-picture ideas of things that you'd like to see Prow move toward. These are just some of the ideas I had, just a little proposal; I'm very interested in seeing what everyone else is thinking. I'll do a quick run-through of the ones that I've put in here.
C
But this is something that I think would be super cool for us to hopefully achieve in 2019, together. I think we need to start defining what the requirements are to actually share access control between people across companies. What are the prerequisites for people that want to be on the on-call rotation? What are their duties, what's expected of them, what time zones are we looking for - that sort of thing. One of the other things that...
C
...I'm personally pretty excited about is using, quote-unquote, best practices around API machinery and the way that we interact with the Kubernetes API server. I think Prow is super highly visible as a project that runs controllers and has a bunch of microservices, but I think we have a long way to go between where we are today and sort of the best practices for how those other SIGs expect us to be touching the API server.
C
That's using it, and so I think there are still a couple of gaps here for a vanilla Prow deployment that make updating your own config - excuse me - a little bit hard, or a little bit painful. So, you know: from a job-authoring UI on the web that helps people push the buttons and set up their own jobs, to maybe better sharding for other types of configuration, to locally testing jobs before they're actually pushed up for review, or potentially even rehearsing the jobs in the target cluster that they would run in after they merge. A lot of these sorts of tiny gaps make the process of vetting your own jobs harder than it should be. And the last one that I put down here is providing metrics, alerting, and some sort of playbook-style thing for an administrator who's running their own Prow, so I think...
C
So these are some of the epics that I've got. I guess, in the time that we have, I'm happy to hear some feedback on them, but in general this is more just a call for people to put their own information in there. As I said earlier, once we have a couple more ideas in here, we can start thinking about some sort of prioritization, or determining where we think our focus is best spent.
A
Personally, just off the top of my head, this last one to me seems really, really similar to the very first one up top about enabling the community to manage Prow. I would want to give them a playbook and a set of recipes to successfully manage Prow. So, just like we have a reference deployment of Prow, a reference set of best practices on how to manage that deployment of Prow would be really useful.
A
I personally am trying to push, in the K8s Infra working group, on what is preventing us from creating a Prow cluster in there today, or like this quarter, and to understand what the migration path looks like. I think a lot of the requirements and prerequisites are similar to issues that the K8s Infra working group has dealt with, or is dealing with, when it comes to a consistent set of IAM policies and what it is that we expect as we vet people to help maintain this infrastructure.
A
What level of commitment we expect from people, things of that nature - so I feel like there's a good amount of overlap there. I have time during tomorrow's K8s Infra meeting; if you want to show up and talk about this there, we could do that. I think that's the direction that has the most overlap for me personally. I would aspire to trying to break this work down into issues in the test-infra repo that we could chunk up by milestone.
A
But beyond that, I haven't had time to review this stuff super in-depth, so I'm also curious to hear what other folks' perspectives are. Because, as always, I struggle with the balance: I want the right stuff to happen, and I want to reduce the amount of friction necessary for that to happen, but it's also really useful to be able to call our shots before we take them, and, for people who show up and want to help out, to give them a list of "here's..."
C
Yeah, I think getting a little bit more in the way of ideas and visions from other people would be a good first step, and then once we feel like we've got a critical mass of ideas in the doc, we can schedule some time in breakout sessions to really think about which of those we want to focus on, and then start making some issues or some actual deliverables.
A
I can't tell if the lack of response is a lack of interest or just not having had time to engage with this. To me, if I look at this and at the overlap with what was just being talked about, it looks an awful lot like trying to document what it is you do around here, and I'm most interested in the parts that describe the work necessary for the continued healthy functioning of the Kubernetes project and its CI, both around what makes...
A
...what makes it easier to deal with the toil and burden of managing the sundry jobs, as well as all of the infrastructure that runs and manages those jobs and all of the display of the results, and how we can do that in the most self-service manner possible. For example, we live in a world today where self-service of job configs is very, very possible, except that we also have a presubmit that makes sure, if you have a Prow job, you know, for a repo...
A
So that's something I didn't explicitly see in that doc, and I will work to add it, but I'm trying to think in terms of that direction. I really like the concept of self-service, and reducing the toil of job configs has been discussed often. It's usually discussed through the lens of "we should make Prow config easier," and then we talk about domain-specific languages, or we talk about inverting it: what if we had the config files live in the different repos, or whatever.
C
Now, one thing that I think we could potentially mention - and maybe there's some feedback; I think I had brought this up at some point - is the idea that when a developer is adding a new Prow job in a PR, to test whatever they're configuring, we'd have some mechanism by which any Prow job configurations that they've edited or added would get, like, a canary run at the time that they're proposing the change, as a presubmit.
C
So they can get feedback on whether, you know, every config map that they reference actually exists - that sort of thing. I think Eric and I were talking about this, and it seems like a feature that is extremely useful for the OpenShift deployment; we've got like a DSL version of that, but for Prow job configs I guess the open question is whether or not the security model would allow it.
A
Well, how would we feel about, like, two weeks from today, reviewing everything that is in a milestone called 1.15 that lays out everything we think we can accomplish there? Let's see how much of that we can tie back to things described in this doc, and then I could take a crack at populating a milestone for what we think we would do for the quarter after.
C
Yeah, and I think hopefully, if everyone gives opinions on the doc and on what they think the next step should be, we can figure out together what we feel is most urgent, or what we could benefit from the most at the beginning, and if we get that consensus on, like, prioritization, it might be easier to be more effective.
F
One of the things we've been looking at within the conformance working group is doing some automation on the boards, and we've been trying to write up - finish up - some automation so that, within the tickets, we're able to do queries to populate the board itself. That's coming along, and it might be interesting. I know there was mention of doing milestones as the time-based thing for when we can get stuff done; the one other approach we have taken is making very small milestones.
F
They pretty much look like features that have a bunch of tickets inside of them - so a milestone means "we've accomplished doing this thing" - and then using something like tags or boards to say when we hope to get it done. That allows you to move that group, that feature set, forward into a later release. So I guess...
A
Selfishly, it's really, really helpful for me to bucket things on a quarterly basis, and that's for two reasons: one, because Kubernetes releases come around every quarter, so if there's something that needs to be done to assist with the release going out the door, it helps me to put it in a milestone named after that release; and two, I work for a company that thinks quarterly, so it's really helpful for me from a planning perspective to have those timelines laid out.
A
So yeah, I'm open to any and all options that make this stuff easier to keep track of. Like, if I were to go and look at the 1.14 milestone right now on the test-infra repo, I would probably be very sad, because I'm not sure how much of that stuff actually happened or got done, and it'd be nice not to repeat that exercise this time. And on that wonderfully motivating note, we're about to lose our room and I'm out of time.