From YouTube: Kubernetes SIG Testing 20180814
D
Initially, my focus was on Windows testing, but we're planning on using it to enable, you know, both PR testing and daily testing on TestGrid for the Azure cloud provider, as well as conformance tests. Is that sort of the same type of thing you're talking about, Nishi?
C
Yes, something similar, but I also went through this with Matt. There's a range of both postsubmit and presubmit tests that are listed out, around a thousand tests, and pretty much everything seems to run on GCP, and I would like to change that. So is there a preference in terms of which tests we should focus on first? Are there other things we should look at as we plan to do this? I wanted some form of common approach here, or some guidance, for doing that.
E
Yeah, I think the one question that needs to be answered is: are we moving the tests to AWS, or are we enabling the tests to be run on multiple clouds? And do we want them to run on all clouds and then do some kind of consensus on pass or fail, or do we want to randomly choose which cloud a given run goes to, so that we get some coverage on all the clouds all the time?
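(For illustration: a minimal sketch of the "randomly choose a cloud per run" idea mentioned above; the provider list and selection logic here are hypothetical, not an existing Prow feature.)

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// pickProvider chooses one cloud per presubmit run, so that over many PRs
// every provider gets some coverage without every PR paying for every cloud.
func pickProvider(providers []string) string {
	return providers[rand.Intn(len(providers))]
}

func main() {
	rand.Seed(time.Now().UnixNano())
	// Hypothetical provider list; which clouds participate would be a SIG decision.
	providers := []string{"gce", "aws", "azure"}
	fmt.Println("running cloud e2e presubmit on:", pickProvider(providers))
}
```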
C
That's a good question. I think as of now, given everything runs properly on Google, we should try to take a subset of the tests, run it on AWS, and look for parity. Then in the future, perhaps what we could do is share the scope of the tests, but given we don't have anything right now, it might not make sense to tell Google to stop running certain tests before we have it.
F
There's some interesting policy stuff with that. I know Eric has talked about it in the past: in a super ideal world, it would be nice to not block the core repo on clouds once we move the cloud providers out. Right now we're running GCE tests, and that kind of makes sense since the provider is still in tree. But if the provider gets out of tree, then...
F
One idea that we've tossed around has been running real cloud clusters in postsubmit, and in presubmit only hard-blocking people's PRs on simulated clusters, local clusters, things like that, plus unit tests and build, and then maybe having optional tests that don't hard-block the repo testing on a provider.
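(A rough sketch of that split, using locally defined types rather than Prow's real config schema; the job names and fields are illustrative only.)

```go
package main

import "fmt"

// presubmit is an illustrative stand-in for a CI presubmit job definition;
// it is not Prow's actual config type.
type presubmit struct {
	name     string
	optional bool // optional jobs report status but do not hard-block merging
}

func main() {
	jobs := []presubmit{
		// Hard-blocking: cheap, deterministic signal that needs no real cloud.
		{name: "unit-tests", optional: false},
		{name: "build", optional: false},
		{name: "e2e-local-cluster", optional: false},
		// Non-blocking: real-cloud e2e that can be flaky or quota-limited.
		{name: "e2e-gce", optional: true},
		{name: "e2e-aws", optional: true},
	}
	for _, j := range jobs {
		fmt.Printf("%-18s optional=%v\n", j.name, j.optional)
	}
}
```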
G
So I don't know what that's going to end up being, but if it's conformance tests, they can maybe signal that a release shouldn't occur. I think this is bigger than an actual test discussion, though; it's really more about compatibility and what should actually, potentially, block a release, and probably a discussion that needs to be had at a bigger level.
F
Unpacking that a little bit, though, there's a big difference between blocking a release and blocking a PR. No, absolutely. Releases already block on things like scalability testing that is difficult to replicate and potentially flaky, and we have to have some human looking at the decision around whether we continue to block on this test. But of course you would not block, you know, my PR wanting to change things just because GCP isn't scaling well enough or something.
G
Sure, and I'm sorry, I had my mic turned down a bit, so I may not have heard: we said it was specific to PRs? Yeah, yeah. Although I was under the impression, from what Aaron has said in the past about the pooled Bazel runs, that we don't know how to identify unit tests with the SIG that owns them, that those are just run for everything, and that they're currently blocking, or listed as presubmit blocking. So isn't it true that all unit tests block all PRs at the moment, in tree, or is that not true for unit tests?
F
They're not provider-specific, like the tests we're talking about here; these are mostly meant to provide confidence for the core. There is the unfortunate side effect that we don't want to release with any of the in-tree providers broken either, because the way you fix those is you make a new release. As soon as those are out of tree, then there's no reason for us to be interested in that.
F
So there's a different discussion: that's the end-to-end tests, which are a lot more of a problem, because those are the ones that are much more likely to fail, especially for reasons that are unrelated to the PR, like we're having quota issues or something, and we've tried very hard to avoid that. But you know, it's still problematic.
F
I do think that, if I can get consensus, it could work pretty well, because if we're actually just going to block on independent tests from each provider, then we're going to increase the flake chance. If I could, I would love to kick them out; even, I'm at Google, and I would prefer to kick us out as well, just because we have to make sure that that's not flaky in presubmit, and it's a much bigger concern when it's holding up people's work on the project, versus if it's a little flaky in CI, where it's not as urgent to fix.
G
Maybe I'm more concerned with the testing being the implementation detail of a larger discussion. I don't really think, or understand, that there is a plan in place to handle compatibility issues between providers and the Kubernetes releases going forward. Right now that's handled partly through testing, and I think it will continue to be handled partly through testing, but once things go out of tree I think it's a larger discussion, and, you know, I have not seen a plan in place for that.
E
So I guess, coming back to the initial issue: are there any tests that are currently being run on GCE that would make sense to move to AWS? Even if it's not, like, a necessary technical thing, but just, you know, sharing the cost of running the builds and stuff like that. It might make sense to kind of distribute that, possibly.
F
I think so. So, for example, the Bazel build: one of the things that would make that tricky, or more problematic, is that we run a cache to make it more efficient, which is part of the reason we like using Bazel, for CI at least, and getting that to work across more than one cluster is going to be interesting. Right now we can leverage a number of things off of being in the same cluster, so we'll need some rework there. But yeah, I think so.
D
On Windows? Yeah, sure, this is much along the same topic here. Yes, of course, I'm here to represent mostly SIG Windows; of course, some of my colleagues are, you know, running SIG Azure as well, but the main thing that I'm looking at is this: we went through, and because we had the right Windows versions available on Azure, we stood up all of our Windows end-to-end testing there first.
D
Tests will be tagged: we're going to mark things as a feature test owned by SIG Windows, so some of the Windows-specific stuff will be run there. That way you're not running it on a Linux VM, where it's superfluous, but there are some mechanics around that along with the tests. We still need to get some velocity here in terms of being able to get our PRs around kubetest, Prow and TestGrid merged, so that we can make these results public for anyone to see.
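(For illustration: a minimal sketch of the tagging-and-skip idea described above, written as a plain Go test rather than the real Kubernetes e2e framework; the tag string and the NODE_OS environment variable are hypothetical.)

```go
package e2e

import (
	"os"
	"testing"
)

// Illustrative bracketed label in the spirit of the e2e framework's tags;
// jobs can focus on or skip tests whose names contain this string.
const windowsFeatureTag = "[Feature:Windows]"

// skipUnlessWindowsNodes skips a Windows-only test when the job is not
// running against Windows nodes (signalled here by a hypothetical NODE_OS
// environment variable).
func skipUnlessWindowsNodes(t *testing.T) {
	t.Helper()
	if os.Getenv("NODE_OS") != "windows" {
		t.Skipf("skipping %s test: cluster has no Windows nodes", windowsFeatureTag)
	}
}

func TestWindowsDNSResolution(t *testing.T) {
	skipUnlessWindowsNodes(t)
	t.Log("running " + windowsFeatureTag + " DNS resolution checks")
	// Windows-specific assertions would go here.
}
```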
D
Yeah, so we're going to pick those up. They don't necessarily have to be in the same PR; I'm having a discussion with that person later today, because we started off on Windows first, but then, of course, you know, they wanted to add Linux tests in it as well. So that behavior is covered in PR number 7625, okay.
F
For that question, I think one of the people I'd go to is Aaron Smith XP, who's been working on sorting that out. I would say there is some precedent for the OS thing. Lately we have tried pretty hard, for example, to make sure that the build is still going to work even though we're not executing on it yet. In the past we would run the full cross-build, but that's pretty expensive.
F
Possibly. We might need some way to have certain tests be non-blocking for unit tests as a feature is being developed; I think that's an area where we're kind of weak right now. For the end-to-end tests, if you have tests in there, then we can say, you know, mark it as a feature and don't make that part of the blocking sweep. Yeah.
D
All right, sounds good. Yeah, as you see in the chat window, it sounds like Nishi was following a pretty similar approach there, of getting the stuff up in TestGrid first, and then once we've got the results there we'll discuss it more. So that's all, thank you, I think.
H
The team wants to start doing some kind of work around Kubernetes downstream testing that lives in the etcd repository, and this might belong to SIG Scalability, but as far as I know, right now we only rely on, like, a gating Kubemark run to decide when it is safe to update etcd in the Kubernetes codebase. I feel like that testing is more about the performance of the whole Kubernetes control plane, and it doesn't really test etcd itself, like the server side, like the performance.
H
So we want to improve there. Right now it's all a manual process and it only runs on top of GCP, so we want to automate this, and then we want to support, like, AWS. So I'm planning to write some kind of roadmap or design docs, working with a colleague from the GKE team, and I want to know who'd be the good person to work with to get this reviewed, and then maybe I can get some help, I mean.
G
Sorry, I was trying to kick myself off mute. Yeah, this is something I'm dealing with actually today, and it's something that I've been thinking about, and maybe this isn't an appropriate thing for us to do, but: we're like all the providers, right, and we do conformance testing anyway, and we need to do it for a couple of previous releases. But there's always... we want to test against head too. Building Kubernetes is just a lot of fun, and it's really no small thing necessarily, and I'm wondering: are there enough people that think...
G
Maybe it would be useful to have a continuous build of master, with some type of expiry, because that could get big pretty quick, so that, you know, ahead of doing conformance tests we could check some known URL or some site for those bits, as opposed to having to build it ourselves. Does this already exist?
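(It does; here is a minimal sketch of the lookup being asked about, assuming the publicly readable kubernetes-release-dev GCS bucket and its ci/latest.txt version marker; the exact bucket and path may differ.)

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumed location of the marker naming the most recent CI build of
	// master; kubetest's --extract=ci/latest resolves a marker like this.
	const marker = "https://storage.googleapis.com/kubernetes-release-dev/ci/latest.txt"

	resp, err := http.Get(marker)
	if err != nil {
		log.Fatalf("fetching %s: %v", marker, err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading marker: %v", err)
	}
	version := strings.TrimSpace(string(body))

	// Built artifacts live under a versioned prefix in the same bucket.
	fmt.Printf("latest CI build: %s\n", version)
	fmt.Printf("binaries under: https://storage.googleapis.com/kubernetes-release-dev/ci/%s/\n", version)
}
```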
G
Thank you very much. I'm sure that was documented; I just hadn't found it yet. Right now I am handling low-hanging fruit, and I'm just using Travis to start some conformance testing, then working in parallel on having Prow provide signal. But for the moment, since for the out-of-tree repo I can use Travis to provide signal, I don't necessarily need to go through Prow for that, so I just want to get conformance tests up and running for that.
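(A minimal sketch of what "just get conformance tests running" can look like against an existing cluster, shelling out to a pre-built e2e.test binary with a Conformance focus; the binary path and the use of KUBECONFIG are assumptions about the CI environment.)

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Assumes a pre-built e2e.test binary and a kubeconfig pointing at the
	// cluster under test (for example, one brought up earlier in the CI job).
	cmd := exec.Command(
		"./e2e.test",
		"--ginkgo.focus=\\[Conformance\\]", // run only conformance-tagged specs
		"--provider=skeleton",              // no cloud-specific behavior
		"--kubeconfig="+os.Getenv("KUBECONFIG"),
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("conformance run failed: %v", err)
	}
}
```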
G
You got disconnected? I found that switch, yeah. That's what I'll probably be doing: in parallel I'm trying to augment kubetest so that we can integrate with Prow for signal, because there's still that nasty in-tree cloud provider and we still want to do conformance tests for it, but for the out-of-tree repo Travis is a pretty quick fix.
B
Do we have time to talk about run_after_success and Tide? Yes. Just a little background: we're trying to get Tide to be suitable for the kubernetes/kubernetes repo, and the main thing that we have left is to add support for run_after_success jobs, or remove the need for that altogether. Those are jobs that just get triggered when the parent job succeeds, and we only use this for the kubeadm job right now.
B
Just recently, like a couple of hours ago, I was looking through the config for them, and I noticed that the kubeadm jobs, the children that actually run after the build, are all set to skip_report. So we shouldn't actually be reporting any of them to GitHub. There is, however, a status showing up on GitHub because of some bug related to parent jobs with children that are skip_report, which is the whole issue. That's, like, a great example of why I hate run_after_success in general and why we should not really have it at all.
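(To make the shape being discussed concrete: a sketch using locally defined types, not Prow's actual config schema, with illustrative job names. The child hangs off the parent's run_after_success list and is marked skip_report, so in principle only the parent should ever report to GitHub.)

```go
package main

import "fmt"

// job is an illustrative stand-in for a presubmit definition; the fields
// mirror the ideas discussed (run_after_success children, skip_report),
// not Prow's real config types.
type job struct {
	name            string
	skipReport      bool  // do not post a GitHub status for this job
	runAfterSuccess []job // children triggered only if this job succeeds
}

func main() {
	build := job{
		name: "pull-kubernetes-build",
		runAfterSuccess: []job{
			// Runs only after the parent build succeeds, and is marked
			// skip_report, so it should not show up on the PR at all.
			{name: "pull-kubernetes-e2e-kubeadm", skipReport: true},
		},
	}
	for _, child := range build.runAfterSuccess {
		fmt.Printf("%s -> %s (skip_report=%v)\n", build.name, child.name, child.skipReport)
	}
}
```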
F
The kubeadm one, yeah, so it runs after the build. How difficult would it be to flatten those jobs? I don't know. I think the biggest problem is we don't have good enough ownership; there's kind of a back and forth where, like, SIG Cluster Lifecycle wants changes to how the artifacts are published, and if we change the binary... I know that's kind of... this job has kind of been long-standingly broken with poor ownership, but at the same time no one wants to be responsible for saying we have no kubeadm presubmit.
F
So it might also not be quite as bad to just build in both, now that we actually cache a little bit more. There's a few items, I think, that aren't completely deterministic, unfortunately, but it should be a lot better than how it used to be. I believe when that was first set up we didn't have proper caching, and it could take as long as half an hour to build, whereas that's a pretty rare event for a PR these days.
A
I was thinking, if we do that and then just get k/k working on Tide, then I think we need to do a survey of everyone that is turning up a cluster within a cluster and figure out if that's what they're using it for, just so we can figure out, like, the larger business problems that they're solving with that feature and figure out if we can support it. Yeah.
B
Like, the fact that it's not doing anything right now, because it's always just writing a green status, means we don't really need it; Tide can just do without that. Yeah, it's not actually doing anything useful now, so for the time being Tide doesn't actually need to support run_after_success; this is just incidental, because the children, yeah, are basically skip_report right now. They're only actually...
J
Is that it? I think that's everything; I think that's all we have on the agenda. Does anyone else have something before we wrap up?