From YouTube: Kubernetes SIG Testing 2018-10-30
A: Alright, hi everybody. It is Tuesday, October 30th, and welcome to the SIG Testing meeting. I am your host, Aaron of SIG Beard. Since this meeting is being publicly recorded and will be posted to YouTube later, please keep in mind that we have a code of conduct, which basically boils down to: don't be a jerk.
A: Today we're going to have a discussion from Avi Kondareddy (I should have asked you how to pronounce your last name before I started this) on a proposal for KUnit presubmits with Prow, and then we're going to have a bunch of questions.
B: Part of our effort is to also drive presubmit, and since the Linux kernel does a lot of their work on mailing lists, we were thinking of having a presubmit service that would receive patches from mailing lists, apply those patches onto the kernel, run tests, and email back to those mailing lists. I think I shared a design doc, yeah.
B: I've been talking about this, and we were considering two different possibilities. So if you go down to the design ideas: one possibility is to have a git server that would receive these patches, apply them to the kernel in a new branch, and pass that branch over to Prow. That'd be fairly simple, because you would do everything beforehand.
E: Maybe you said this and I missed it, but what was the value prop for this? What problem is it solving?
B: So I can add that at the top, yeah.
E: There was a conversation going on in the chat about what this had to do with Kubernetes, but I think somebody answered it. This is to extend Prow to support ingesting patches via a mailing list, the Linux kernel mailing list being one of those, right? As a possibility.
B: Yeah, let's try to generalize it. So, you know, thank you.
B: Oh yeah, and the other possible solution we were thinking of: since there was some discussion about having a mail server in Prow, we could find a way to have that mail server receive these patches and then have them accessible somehow for the clonerefs container, to possibly apply these patches after cloning the kernel. That has more implications, because we would have to have series for these patches, and some might get out of hand, and we'd have to change some of the logic for clonerefs, so I'm not too comfortable with that.
C: Real quickly: what are the major pieces of Prow that you're trying to make use of here? A lot of these proposals, whether you're setting up the git server as a standalone thing or changing clonerefs, seem like fairly large changes to the way that the larger infrastructure works right now. So what are you trying to reuse? What are the things about Prow that are interesting to you?
B: So it's just a different way of triggering a job.
G: If you look at the proposal, the actual design that's out there doesn't involve things like changing clonerefs. The stuff that maintains git will live outside of Prow; it will just do it in a way that works for Prow, and I think that's actually the gist of it: they won't need to change Prow, by adapting to it. They'll probably just need good enough support for sending.
A: I think either way, a lot of different facets of this line up nicely with our intent to abstract away the reporting for Prow, because we definitely want to be able to report job results and job statuses in different places and in different ways, and we also want Prow to be triggered by different things; GitHub is not the only change management system out there. So I think this is really great, and it encourages us to think along a different dimension: what if not everything was a web service, but there was also mail involved here?
A: I like the first approach a lot better in that context, where we talk about just maintaining another git server. I know in the cons it's called out that now we have to maintain another piece of infrastructure, but in the interest of keeping Prow well scoped, we kind of don't want Prow to become its own change control system. Keeping that very distinct, or abstract (receive things, do things based on them, and then report back out), makes it a lot more reusable.
A: So I'm curious how strongly you feel about the con of maintaining additional external infrastructure. Maybe, to go back to your original scope: when you're talking about unit testing for the kernel, is this for a specific small audience, or is the intent to grow this to the entire Linux kernel mailing lists?
B: The intent is eventually to grow this to the entire kernel community, we hope. First we're planning to use it internally on our team; we have a mailing list for KUnit, and once KUnit gets upstreamed into the kernel, we were thinking that at that point, if it's accepted into the kernel, there would need to be support for it in general for the Linux community, and maintainers would probably be encouraged to use the service too. And I just want to clarify something from earlier: we really do want reporting back to the mailing list in either case, partially because not having it report back to the mailing list would be a departure from how development's done for the kernel.
C: I think, when we do take the current GitHub reporting logic out of Plank, having a deployment of a service analogous to Trigger that creates the ProwJob CRDs from mailing lists, and a deployment of a controller that reports back, would be two standalone ways for you to make use of the Plank controller and actually run your jobs.
C: Could you talk about that? You said that some patches depend on other patches and such, so if you were to host an intermediate git server with pull refs for all the things that you wanted to actually test, how much processing are you doing to generate those refs? Because right now...
H: This seems like a very Google-specific effort, because there are Googlers in the same room talking about this right now. I'm not saying that you can't apply this set of solutions to this domain, but I also want to make sure that, as a SIG that belongs to Kubernetes, we are doing what's in the best interests of the community's work and the whole encompassing space.
A: I agree with that. I think at some point here we are long overdue for splitting Prow out into its own subproject, and I view this more as a design discussion and a real quick sanity check that this proposal is in line with where the community as a whole would like Prow to go, making sure that the stuff we're proposing doesn't turn Prow into something it is not, for the purposes of Kubernetes testing. So I agree.
A: We can totally have this as a breakout discussion or something elsewhere, but right now, when we tell people about Prow and they need some forum to come bounce ideas off of, this is the meeting where they do that. We definitely do talk about plans to move Prow into its own subproject, its own repo, etc., and there's kind of a lot of heavy lifting to be done there. So inasmuch as this was asked for as a quick sanity check of this proposal, I feel like that job has been accomplished here.
G: For this discussion, super quick: basically, I don't think we necessarily need to sign up to, say, own the ingest-email part; that should probably live somewhere else. But the able-to-send-email part would be nice to have, and any kind of cleanup that comes about of how the triggering interface works covers things that we're kind of already doing.
G: We should review this so that we can benefit from some of the effort, but I would probably say something like: the controller that handles the emails should just be another component, so that Kubernetes as a subproject is not trying to handle kernel mailing lists itself.
C: Yeah, I mean, at first blush I agree. The way that this is factored right now in the proposal, assuming we take the reporting out of Plank, I think it does address Tim's issue. As long as the components that they're building import our CRD client, they should have no problem writing those components entirely separately from Prow, just deploying them on the same service, and they'd interact with the Prow deployment via CRDs.
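The decoupling C describes, external components creating ProwJob objects and talking to Prow only through its CRDs, can be illustrated with a sketch of a ProwJob resource. The field values below are hypothetical and only meant to show the shape of the interface; the real schema is defined by Prow's ProwJob CRD:

```yaml
# Illustrative only: a mailing-list ingestion component could create an
# object like this, Plank would run it, and a separate reporter would
# send results back to the list. Values are hypothetical.
apiVersion: prow.k8s.io/v1
kind: ProwJob
metadata:
  generateName: kunit-presubmit-   # hypothetical name
spec:
  type: presubmit
  job: kunit-smoke                 # hypothetical job name
  refs:
    org: torvalds                  # hypothetical refs for the kernel tree
    repo: linux
    base_ref: master
```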
G: There are other things that need to happen; like, Sen's been working on making reporting a separate component, and that has some nice properties. We can probably get them to help us with that as part of this, which will benefit Kubernetes, and if we want to keep scope down, we can do that without taking on the actual scope-growing stuff.
E: What I was trying to say earlier, in terms of the notes, and I think this really reflects what you all are saying, is that everyone agrees this problem should be couched as, or discussed as, custom ingestion and reporting, not specific to this use case, but in a way that would satisfy this use case. Because, Aaron, it sounds like that's a way you wanted Prow to evolve anyway: not to be completely dependent upon GitHub for either. So, sure, but I'm a huge...
A: So I guess, like, we've hit the time box here; I want to give us time for Patrick's discussions. Avi, I guess what I'd ask you is if you feel like you got what you needed out of this discussion; if not, I think we could certainly do a breakout discussion or a follow-up discussion.
I: Yeah, so basically we're still on track to get Windows stable for version 1.13, and so it's critically important that we're on track to get the last tests online that need to be done. The first issue I had on the list was 51540, asking about a unit test machine, and so I went back and tried to figure out what this actually means. I think what the original filer suggested was that we would want to be able to run the test task in Bazel on Windows.
A: So we have this written requirement, although it's really more of a loosely enforced suggestion, that unit tests should pass on every single operating system, including Windows, Mac, and Linux, but I don't think we actually continuously exercise them in that manner, other than on developers' laptops.
I: Okay, can we get a one-hour breakout on this sometime this week? Because I think at this point, the next step here is: if we were to try to do something like this for 1.13, we would really need to understand how that scheduling works. If it means that we have to have Windows nodes in the same cluster that Prow is running on, that's fine; GCE supports Windows. But we just sort of need to figure out: is this a version 1.13 thing or not?
G: It would have to be a different cluster, because it's a GKE cluster, which does not have that yet, though it has saved us some maintenance costs. But you can run jobs in different clusters. As an example of that, there's a security project that's private, and we have a different cluster that runs that, and we have another cluster that does some more trusted builds. Yeah.
A: What my gut is telling me is that I don't view this as something that would impede the progress of Windows moving to stable, right? So, okay, as long as we can agree on that, I would be comfortable punting this. If it turns out this is blocking that, I can figure out how to help you work through it. Yeah.
I: Okay. So the next item on the list was: we had taken a proposal to SIG Architecture and Conformance, talking about how to deal with tests that need to run on Windows versus what needs to run on Linux. Currently, SIG Architecture's recommendation is that we should go ahead and test in a hybrid cluster and use node selectors, so that the applicable OS can be chosen for each test. There are basically two different ways to do that, which I have linked there.
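The node-selector approach mentioned here relies on the OS label that kubelets publish on their nodes. A minimal sketch of steering a test pod onto Windows nodes in a hybrid cluster (the pod and image names are hypothetical; clusters of this era exposed the label as `beta.kubernetes.io/os`, which later graduated to `kubernetes.io/os`):

```yaml
# Illustrative only: pin a test pod to Windows nodes via the node OS label.
apiVersion: v1
kind: Pod
metadata:
  name: win-e2e-test                # hypothetical name
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows  # use "linux" for Linux-only tests
  containers:
  - name: test
    image: example.test/win-e2e     # hypothetical test image
```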
H: ...a conformance test is not something that we explicitly support in any of the deployment tooling, right, because you need to then label your entire cluster and do the other different types of matching in order to support that. So, as a result, in SIG Cluster Lifecycle we explicitly state that we do not support hybridized clusters of Windows and Linux; you have to be homogeneous.
A: I don't know, I'm just gonna stop us here and say these are all examples of discussion that needs to happen more in SIG Architecture. Testing cares less about this sort of stuff; we're happy to enforce whatever it is that SIG Architecture mandates as its policy, but we don't care about defining what is or is not Kubernetes from a conformance perspective, right? We're here to help, like, run the things. So, okay, okay.
A: ...there are tests that appear to require privileged behavior. The privileged thing is where there could be some clusters that would deny that behavior through pod security policy and would thus fail conformance, and so we'll have to tackle the tricky question of whether or not we think that should or should not be considered conformance. But we're kind of at the labeling stage rather than the what-do-we-do-about-it stage, because that's going to be the architects' decision.
A: That's something I would love for us to solve, but that's why I don't think it's a critical blocker. Like I said, we write down that unit tests should pass, but we don't enforce it, and so, shockingly, it's up to a developer on a Mac, maybe, to bump into these problems and then fix them whenever they come up. So it'd be great if we avoided that situation on Windows as well. And just to kind of jump forward to your other item there, Patrick, about needing review on some PRs: these are all conformance-related PRs, so ping me with the appropriate ones and I can start pushing them through. I already LGTM'd and approved the ones that looked obvious, and raised the others, the ones that seemed like they might be changing behavior that might change the definition of conformance, to Architecture. And you got some movement on this? Okay.