From YouTube: Kubernetes SIG Testing 2018-01-23
B
Yeah, thanks. A couple of questions. The first one is revolving around a PR to make the lease reconciler in the API server the default, and right now there aren't any HA end-to-end tests in the tree at all, as far as I know. I was wondering what everyone's impression is on how to get the lease reconciler in the API server to beta and eventually to GA.
C
There's no formalized, blown-out HA configuration. We are working on that in SIG Cluster Lifecycle; that is part of the default blocking job suite. I don't know if Justin wants to chime in there; he might have something that's blocking in kops to update the config if they wanted to, but that would open up a can of worms there. I don't know if anyone wants... there's no one that wants to own this, right? Like, say, Cluster Lifecycle would have to own it.
D
This would have to have, like, real HA deployments, but you could still validate that you did failover properly, I'm assuming. Like, maybe, I was looking for someone who is better equipped to answer that question from an actual deployment perspective: whether it was possible to isolate the behavior in question. If that was the case, then at least we could have some degree of confidence that it would work, I think.
D
So there's, I mean, there's some work ongoing, I think. No, I mean, I think there are a lot of integration tests today that use etcd, so if etcd is the requirement, that's not a blocker. I just wasn't sure whether you could get a representative test that would be useful. If you can find someone with enough HA knowledge to answer that question, at least you can close off the possibility of integration versus having to go full-blown HA, which implies a lot more work. Okay.
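For context, the lease reconciler under discussion has each API server maintain a TTL'd lease key in etcd (it is enabled with the kube-apiserver flag --endpoint-reconciler-type=lease), which is why etcd-backed integration tests could plausibly exercise failover without a full HA deployment. Below is a minimal sketch of what such a check might poke at; the local etcd endpoint and the /registry/masterleases/ key prefix are assumptions for illustration, not details from the meeting.

```go
// Sketch: inspect the lease reconciler's state in etcd. After stopping one
// API server, its lease key should expire and disappear from this listing.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Assumed local etcd endpoint; adjust for a real test environment.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect to etcd: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed key prefix: one TTL'd key per live API server.
	resp, err := cli.Get(ctx, "/registry/masterleases/", clientv3.WithPrefix())
	if err != nil {
		log.Fatalf("list master leases: %v", err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("live apiserver lease: %s\n", kv.Key)
	}
}
```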
E
I mean, I'm a little lost, but we do not have HA tests for kops, and there is a disruptive test which I don't think we're running today. So I don't know whether you can combine those two things, like, whether you'd create another job. Obviously we don't want more kops blocking jobs; I'm sure we can all agree on that.
D
You know, in a reasonable amount of time, I mean, really you're just talking about the masters and etcd participating in that; you don't necessarily need a full cluster, unless I'm missing something. I'm not saying you don't want to do the HA, like, the more involved testing. I'm just saying, as a base case, you're testing that something is working. Maybe not, you know, you still need to do the larger-level test, but that could be influenced by all kinds of things, whereas this is just "is my failover mechanism reliable?"
E
Okay, wasn't that a couple hours old, a couple days? Well, yeah, it's all right, so it's good we have it. Oh, there's, yeah, that hit a different bug in kops, which we have a PR for. It's blocked; it's also blocked behind the kops jobs like everyone else. And when we get more quota in AWS, I will create the equivalent job in AWS. Okay.
A
Okay, and the alternate tack sounds like, if we wanted to not have to go through that and test it in integration instead, who would be the appropriate person for Ryan to talk to, to get expertise on how the integration tests work today?
C
That's a downstream constraint. Like, I'm not speaking from experience, but when I had to deal with this on Atomic, we had separate testing infrastructure that would basically do bleeding edge on Atomic and run all of the tests and produce results from that side. But it was a downstream constraint; we never tried.
C
There was a point in time where we tried to push that upstream, but then we just raised our hands, because the integration with Testgrid wasn't all there and no one was watching the signal on the other side, so we just stopped. So it's totally a downstream constraint for your environment; I would not push that constraint upstream. Okay.
J
See, the easiest would be to do it through Zoom; the room is VC-enabled, so that's all done through Zoom. We should... well, I've had trouble hooking up a camera to my computer, but I'm, like, one of the few Linux users here, so, or getting my computer to use the camera and stuff. I might be able to track down, like, a second laptop we could use as a video input, so we could use Zoom.
J
Parking will be validated at the end of the day, so bring your parking ticket and we can validate that at the front desk before we leave. We'll have lunch. We also have a team happy hour or social event thing every Friday from 4 to 5, so we could crash that if we want to start off there, and then somebody mentioned a social afterwards for whoever wants to come to that. So.
A
You have somebody invading your space there; you should remind them of the cone of personal space. Yeah, so, like, you know, I'm happy if we just sort of show up... wow, I managed to close my browser window, so I can't look at it. I'm happy if we just show up and continue to hack on whatever we're hacking on, but it also seemed like it might be a good time to get everybody in a room and kind of agree on a couple of things.
A
So if anybody has any suggestions, I think we should keep the conversation going on the mailing list or add to the meeting notes. But the things I have seen come up lately are mostly around the idea of blocking jobs. So a while back I was kind of trying to define what sort of criteria we should use. Let's be clear: we don't actually have anything documented for what a blocking job is and what we should do with it, and I think we should do that.
A
I think we should document that. I think we should describe the criteria for a blocking job, and I think we should have documentation on how we act if a job that's blocking merges starts to fail for too long, and how we act if a job that is blocking the release is failing for too long. So I have a proposal linked in the doc (it was a Google Doc), and then I have a SIG Release issue.
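To make the documentation being proposed concrete, here is a hypothetical sketch, in Go, of the fields such a per-job policy might capture. None of these names or thresholds come from the meeting or from test-infra; they only illustrate the kind of criteria and escalation rules being discussed.

```go
// Hypothetical sketch of a documented blocking-job policy.
package policy

import "time"

// BlockingJob describes one CI job and the rules for acting on red signal.
type BlockingJob struct {
	Name          string        // job name, e.g. a merge-blocking presubmit
	BlocksMerges  bool          // true if it gates the merge queue
	BlocksRelease bool          // true if it gates cutting a release
	Owner         string        // SIG or person on the hook when it goes red
	MaxRedFor     time.Duration // failure budget before escalation
}

// NeedsEscalation reports whether a job failing since firstFailure has
// exceeded its documented budget and should page its owner.
func (j BlockingJob) NeedsEscalation(firstFailure, now time.Time) bool {
	return now.Sub(firstFailure) > j.MaxRedFor
}
```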
A
Part of the problem is, since this defines release-blocking jobs as well as merge-blocking jobs, it kind of spans these two SIGs, and I don't know where to pull-request docs anymore. But I'm trying to, I want to, like, hash that out in front of people and see if we can come to consensus there. I think, sort of related to that, we should then develop a process for how people can get new jobs added and get them to the point where they're blocking. So I was thinking, for our gracious hosts' benefit.
A
If we could walk through the use case of how we could get EKS-based jobs roughly to the same place that GKE-based jobs are today, but with passing... where we sort of describe, you know, what's the infrastructure that you have to hack on to add support, the right things in the right places, through to how we can get to the point where Prow is actually running this stuff and blocking based on it?
A
I'd walk through that. I also think that collectively we could stand to talk about good and bad habits when it comes to proposing and rolling out automation changes. People are doing a pretty good job of talking within this group; I think we need to work on pushing that out through contributors and the rest of the community on the whole. And it sure seems like we've had an awful lot of discussions around dependency management and Bazel and dep and stuff lately. So those are just some ideas I had to give us some structure, but I'm totally fine if it's a free-floating day. I think the main thing for me is: face-to-face is cool, it's really awesome to have high bandwidth, but I want to make sure that whatever decisions we come to are documented. So I'm happy to volunteer myself as note-taker and make sure that there's, like, a meeting-notes thing that comes out of this, similar to how the contributor summits have happened in the past.
H
From my side, oh, good, also: so we are on the other side, and probably we cannot make it this Friday, but we are also interested in, like, the discussions you're going to have. So maybe, whenever you are going to have, like, an open discussion session during the 11:00-to-5:00 window, we can maybe make a separate SIG Testing event or something, so that we can hop in remotely, I think.
C
Whether that's too early or not: 7:30 will give us plenty of time; eight might be a little crunched, because there are some folks in the UK who want to attend, and folks in other time zones. So I will send out updates. We're trying to hash out the details on that PR, and it's just hard coordinating people in different time zones across the world.
A
For sure. And then, just to dovetail off of that, I think I basically have most of the automation set up on the kubernetes-sig-testing org and the frameworks repo; thank you to everybody who helped me out there. I've managed to document most of what I have done, if we want to follow that pattern going forward, Ryan.
A
Okay, Tim, did I cover your rant about kops in my rant about describing blocking jobs? Kind of.
E
There was some small issue with Intel chips or something, I don't know, that sort of forced everyone to do a lot of updates, and no one was able to really test them. And so we had a weird thing where we had a bad image, and then the image a week later seems to work. But we didn't have a procedure for testing; we don't have a solid procedure for e2e testing of the images, and that I think has been addressed by...
E
We
now
have
a
job
that
tests,
what
we
call
our
alpha
channel
and
we
have
a
job
that
tests
our
stable
channel,
so
promotion
from
alpha
to
stable
will
be
gated
on
passing
alpha.
The
blocking
job
will
use
the
stable
channel,
hopefully
magically
like
the
break,
although
it
was
flaky,
so
it
is
still
possible,
but
hopefully
we'll
have
like
consistent
test
run
for
a
week
and
then
we'll
promote
it.
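As a sketch of the promotion gate just described (a stretch of consistently passing runs on the alpha channel before promoting to stable), something like the following check would do it. The helper, its inputs, and the window are illustrative assumptions, not the actual kops tooling.

```go
// Illustrative alpha-to-stable promotion gate: promote only if every run in
// the trailing window passed and there was at least one run (no signal
// means no promotion).
package promote

import "time"

// Run is one test-job result against the alpha channel.
type Run struct {
	Finished time.Time
	Passed   bool
}

// ReadyToPromote reports whether the trailing window contains only passes.
func ReadyToPromote(runs []Run, now time.Time, window time.Duration) bool {
	sawRun := false
	for _, r := range runs {
		if now.Sub(r.Finished) > window {
			continue // older than the window; ignore
		}
		if !r.Passed {
			return false
		}
		sawRun = true
	}
	return sawRun
}
```

For the week of green runs described here, the call would be ReadyToPromote(runs, time.Now(), 7*24*time.Hour).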
E
There was another issue, which is, like, we'd like to go to... there was another unrelated issue, as it turns out. The 4.9 kernels changed the connection-tracking proc interface in a way that broke one of our tests, but I think we merged that PR. But that was sort of very unrelated to everything else.
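For context on that kind of breakage: components and tests read connection-tracking settings from conntrack's proc files, so a kernel that moves or reformats those files breaks them. A hedged sketch of such a read follows; the transcript doesn't name the exact file the broken test used, so this uses the common nf_conntrack_max knob purely as an example.

```go
// Sketch: read a conntrack setting from the proc interface. Paths and
// formats here are kernel-dependent, which is how a 4.9 kernel change
// could break a test that relied on them.
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/proc/sys/net/netfilter/nf_conntrack_max")
	if err != nil {
		log.Fatalf("conntrack proc interface not readable: %v", err)
	}
	max, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		log.Fatalf("unexpected file format: %v", err)
	}
	fmt.Println("nf_conntrack_max:", max)
}
```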
A
That helps. I mean, from my perspective, you know, we put in place a health check that should alert when things haven't merged within some amount of time and there are still things in the queue, and that alert fired, so that part of the system worked. I think the open question, which I want to see us hash out, is: what do we then do? What's the documented policy there? Because, personally speaking, I get really antsy when people are like, "it's been broken for 24 hours, kick it out of the queue." I'd rather it be "it's been broken for 24 hours and here's what we are doing to fix it," and see some active communication there, because it's blocking for a reason. Just kicking it out of the queue because it's failing means that broken stuff can still slip in in the meantime. Just in the abstract, maybe we could have behaved differently in this release, but, like, actually documenting what we're gonna do there: I want to see that happen, because we don't have that at all.
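A minimal sketch of the health-check idea described above: page a human when PRs are queued but nothing has merged within some budget, rather than silently ejecting a blocking job. The types and the threshold are assumptions for illustration, not the actual submit-queue code.

```go
// Illustrative merge-queue health check: the queue has work but has not
// merged anything within maxIdle, which should trigger an alert.
package health

import "time"

// QueueState is a snapshot of the merge queue.
type QueueState struct {
	PendingPRs int       // PRs currently waiting to merge
	LastMerge  time.Time // when something last merged
}

// Stalled reports whether the alert condition holds.
func Stalled(s QueueState, now time.Time, maxIdle time.Duration) bool {
	return s.PendingPRs > 0 && now.Sub(s.LastMerge) > maxIdle
}
```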
C
Right. I think there are some blocking jobs which are absolutely critical to block the queue, and then there are other blocking jobs on the list that I don't necessarily know are critical to block the merge if they go out of whack for some amount of time, right? Obviously, if it's gone for a whole day, then there's something fundamentally wrong or broken. But if there are quota issues, or if there are flakes on a provider, or if there are other constraints that occur, it seems like our definition of blocking might be a little rigid for some use cases. So the signal that we have should either have a retry interval with a failover of some kind or, you know, some change in policy; I think that makes a lot of sense here. Yeah.
A
And this is why I want us to even document the policy we have to begin with, before we start changing it, because it's all kind of verbal and organic right now. On "a provider": the problem is, there is one provider right now, and it's Google, right? Google is also a provider; you have to have some kind of provider in the first place.
E
That is all these things, so that we can turn off the kops job temporarily and still feel comfortable that we have coverage of, you know, an Ubuntu operating system or a CoreOS operating system or AWS or something; you know, more signal. The problem right now is we have, like, two baskets, and we've said we're not turning off the GCE basket, but if, um, we turn off the other basket...
I
Seems reasonable, like, in the abstract. So, for instance, with the unit tests, if they start 100% failing, they should be hermetic enough that the fix is a revert of code that went into the repo. Whereas, like, with this, it could be, like, an API problem on a provider or something, and so when that's outside... like, I still feel like the first action should be a revert of some sort or a turn-off; it should be a positive action.
A
Like, I do have this; there's a Google Doc linked here that Jason swore he was gonna do some more words on, but if he doesn't get it done by Friday, I will. I think it's important for us to have something to talk over, and yeah, like, clear ownership and flakiness and signal and stuff as part of it. Eric, you had something you wanted to say? I mean...
L
I can talk about it more on Friday, but, like, I really feel like... you know, part of the problem is I feel like the fact that we run so many jobs on GCP is more of a limitation than a desired state. Like, I would like to get to a world where we have Minikube handling most of our presubmits, and so then it's easy for people to rerun them. I would not like to get to a world where there are, like, 20 different providers and now I need to run on GCP and AWS and this other thing and this other thing, and that's gonna be super flaky and also really hard for me, to get all 20 of those accounts to, like, potentially debug bugs. So I think we should, you know... and ideally, yeah, but I think that maybe, I don't know if that's going to be controversial, and I think that's a good conversation to continue talking about on Friday.
A
Then, Steve, you had something about long-term plans for testing for that repo, which I think might also be a good conversation for Friday. But just real briefly: the use of a separate kubernetes-sig-testing org I think is something that we're piloting. I'm not sure how hard we actively want to lean into it until we see more concrete details about the different orgs and repositories and whatnot from the steering committee.