From YouTube: Kubernetes SIG Testing 20180710
A
A
A
A
B
B
A
B
No, actually, it is a change. Sorry, it's a change that will impact a lot of people. So the idea was to ask the SIG Testing group first, and then forward this, maybe, to other SIGs, like user experience maybe, or I don't know, because it will require updating a lot of documentation, and, yeah, we will need to educate the non-members, because today it is pretty straightforward.
B
C
A
C
The one other thing I was wondering: currently, once you're given an okay to test, you don't need it again for changes to the PR, is that right? Okay, right, so we're essentially just trusting that once you've written code that's reasonably good, subsequent changes will still be okay to test, yeah.
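The trust model being described here can be sketched as a tiny helper. This is a hypothetical function with made-up field names, not actual Prow code: org members are trusted outright, and a non-member's PR only needs `/ok-to-test` once, with later pushes reusing the existing label.

```python
def needs_ok_to_test(author_is_member, labels):
    """Hypothetical sketch of the ok-to-test policy: members are
    trusted; non-members need the label once per PR, after which
    subsequent pushes are retested without fresh approval."""
    if author_is_member:
        return False
    return "ok-to-test" not in labels
```

So a new push to an already-approved PR triggers retests automatically, which is exactly the "trusting once you've written reasonably good code" behavior under discussion.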
B
A
D
In the future, I think one of the main things is just that Tide can potentially have a better idea of whether it was actually run at HEAD, yeah. And, like, if I could fix things, I would say we should change how reporting GitHub statuses works, and then we can build around that. That's not reasonable. Well, I mean.
A
Instead, because, like, if a PR has tested successfully against an outdated master, or an outdated target branch ref, that's equivalent; the only difference is we trigger retesting against the latest of that branch, but it doesn't actually take it out of the queue, right. I.
D
A
A
I guess what I'm saying is, like, the statuses record the commits in your PR, and then they record the symbolic ref, the branch you are merging into. So there's two cases: either the status is still valid, or the symbolic ref, like master, has changed and the status is invalid. But in either case, if you tested successfully against an older master, it's equivalent, because Tide will just silently trigger a retest before merging it. So from a user's perspective, those two.
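The two cases being argued here can be written down as a tiny decision function. This is a hypothetical sketch of the reasoning, not Tide's actual implementation; the function name and return values are made up:

```python
def merge_action(status_passed, status_base_sha, branch_head_sha):
    """Sketch of the argument: a success against an older base is
    treated the same as a success against the latest, because the
    stale case just gets a silent retest before merging."""
    if not status_passed:
        return "wait"                 # not a merge candidate at all
    if status_base_sha == branch_head_sha:
        return "merge"                # status is current, merge directly
    return "retest-then-merge"        # stale but equivalent to the user
```

From the user's perspective the last two outcomes look identical, which is the point being made.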
D
C
A
That's amazing. That is one of the concrete cases that we're seeing, because we actually had a Sinker interval period that was super short, so we saw that, if your Sinker period is short enough, a ProwJob will have been created, the test runs, you report back to GitHub, and then Sinker deletes the ProwJob, and then you're stuck, because that's just a complete wedge and there's no way to get out of that hole right now, and that's kind of frustrating.
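The race with a too-short Sinker interval can be modeled as a toy timeline. This is a simplified simulation with made-up parameters, not Sinker's real garbage-collection logic:

```python
def prowjob_survives(finished_at, consumed_at, sinker_interval, max_age):
    """Toy model of the race: Sinker wakes up every sinker_interval
    seconds and deletes completed ProwJobs older than max_age. If the
    record is deleted before everything that needs it has read it,
    the PR is wedged."""
    t = 0.0
    while t < consumed_at:
        t += sinker_interval
        if t >= finished_at + max_age:
            return False  # deleted before it was consumed: wedged
    return True
```

With a sane interval and max age the record survives; with an aggressive interval and a short max age, the deletion lands in the window before the result is consumed, which is the wedge being described.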
A
A
And then also, like, from an administrative perspective: when somebody says, hey Steve, why is my PR not merging, I go to GitHub and everything's green; I go to the PR dashboard and there's no problems there; and then I have to go, like, read non-existent Tide logs from the cluster to try to recreate why something is in the pool or not, and that's, that's not great.
D
A
A
'Cause I feel like we started with, it's only gonna use ProwJobs, and then, oh crap, we need to use GitHub statuses, and now we're in this weird intermediate state, I'm not really sure. Anyway, like, I guess my point is, like, I think we need to sit down and think about this, 'cause it's, like, super confusing right now, and it's very hard for me as an administrator to say: oh no, developer, let me help you, this is why your PR isn't merging, because I have no idea. Alright.
D
A
D
D
I do think that also figuring out better titles for those states, and cleaning up that page a bit, could help, yeah. Our status page is already in such better shape that I think we have a quicker win there: to, like, finish figuring out what the requirements are supposed to be, and make sure they're reflected on that page, yeah.
A
D
Actually, the PR status page is kind of one of them, just because, yeah, as ugly as the submit queue is internally, the submit queue dashboard is reasonably useful for figuring out what is going on, and it's a lot more helpful than anything on the Prow side. 'Cause, like, I don't care about what tests are running, I just want to know, like, is my change in the queue, and is it gonna get merged? This is way more useful for that, and then.
A
Yeah, and I think one of the other questions that I came up with this week, while I was trying to debug some stuff, is that it's not entirely obvious to me how batches are picked. Like, this became obvious to me this week because we have somebody who submitted a PR, like, two years ago, and it's still open, and it's just merging now.
D
Yeah, I mean, I also ran into something where, like, just in general, the batch making, like, plays badly with our tools. I noticed the submit queue over the weekend was, like, it ran, like, a verify test in the batch against the same batch, like, six times before it gave up and tried another batch, yeah, and then it still included the offending PRs.
D
I think the answer there was that, for Kubernetes, there is a presubmit that runs differently if it's in postsubmit or batch mode versus on some PRs: it may skip some things, because there are expensive checks that it can optimize out, and so it was skipping them and thinking it was fine, and then it was not passing in batch mode. I.
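What's being described is a presubmit that optimizes out expensive checks per-PR but always runs them in batch or postsubmit mode. The failure mode can be sketched like this, with hypothetical check names:

```python
def checks_to_run(mode, pr_touches_generated_code):
    """Sketch: an expensive verify check is skipped on an individual
    PR that doesn't appear to need it, but always runs in
    batch/postsubmit mode, so a PR can pass alone and then fail
    every batch it's included in."""
    checks = ["unit-tests"]
    if mode in ("batch", "postsubmit") or pr_touches_generated_code:
        checks.append("verify-generated")  # the expensive check
    return checks
```

A PR that doesn't trip the per-PR condition goes green with only the cheap checks, and then the same commits hit the expensive check in the batch, which keeps failing the same batch over and over.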
D
A
D
We're also getting some new people; I will see if I can maybe push them that way as well. I think, I think Cole is kind of doing almost all the Prow stuff right now, and, like, maybe, maybe he needs a hand there, yeah. All right, he's not, not all in open source, but he knows a little bit from working on it and things. Okay, cool.
C
So, since we've started taking a bigger piece of, say, AWS, we've recently set up a few subprojects; they're the AWS IAM Authenticator and, I think, the ALB Ingress Controller were added. And I'm just wondering, what is the, what is the recommended way of testing subprojects within the community? Like, do we run our own Prow cluster? Do we use the community Prow cluster? What does that look like? This came.
A
Up last week, actually, as an action item; as a community we need to figure out the answer to that question. But I think, in order of integration, like, it's easiest to make use of the existing Prow cluster and make use of the existing build clusters. So all you do is provide a job config; at some point you might need to extend kubetest if you have, like, provisioning needs, right.
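For the "just provide a job config" path, a minimal presubmit entry might look something like the following. The job name, image, and command are hypothetical; real entries live in the kubernetes/test-infra job config, and the exact YAML schema should be checked there. The sketch shows the dict form that such a YAML entry would load into:

```python
# Hypothetical minimal presubmit entry for a subproject repo,
# in the dict form a YAML job config would load into.
presubmit = {
    "name": "pull-aws-iam-authenticator-unit",  # made-up job name
    "always_run": True,
    "spec": {
        "containers": [
            {
                "image": "golang:1.10",              # assumed test image
                "command": ["go", "test", "./..."],  # run the repo's tests
            }
        ],
    },
}

def validate(job):
    """Tiny sanity check for the sketch above: a job needs a name
    and at least one container."""
    assert job["name"] and job["spec"]["containers"], "incomplete job"
    return True
```

The point made in the meeting is that this level of integration is the cheapest: the shared cluster runs the job, and the subproject only owns the config.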
A
D
So far, even some projects that are not ours are just kind of running on that. Now, because long term, or possibly even shorter term, we're probably going to need to split Prow up a bit and split things up a bit more, it might become more of a question of, like, well, who should be running what. But right now we kind of just run things on there, as long as it's really not completely ridiculous, Bitcoin mining or something.
C
D
Fixed, because a bunch of the ways you do things, like, we used to just have a file that had all the jobs in it, one giant list, and those are kind of being fixed right now. So I would say the best thing is actually Service Catalog: it has been through onboarding, and there is an issue with a thread following it. I think once we're kind of done onboarding them, we're starting out some things with splitting things out there.
D
B
C
D
D
Right now, we kind of have the old way of, like, how the jobs are, like, bootstrap does the checkout and stuff, and then we have the future way, which we're starting to move things into, that's a lot cleaner. And, unfortunately, most things are still kind of built around the old way. So, like, that can be easier, but it's more arcane and tribal knowledge; the newer stuff is a little bit better designed, but not quite up to parity for everything we can provide with the older tooling, okay.
D
A
One of the complicated things is, like, it really depends on how you want to integrate. Like, there's maybe, like, five or six different ways to actually integrate with the system, depending on, like, what level of control you need. You know, there's a difference if you, like, you know, I want to run tests on a very specific set of hardware.
D
A
Also, since we're onboarding a huge number of teams as well, I think we're also trying to think through how do we best, like, what onboarding stuff can Prow self-host, that's documentation on the, maybe on the docs website; but then also, where do we draw the line? And, you know, what is more deployment specific versus, like, what is actually Prow specific. So, yeah, some documentation might be just hosted on Prow later, yeah.
D
Basically, my plan soon, once I get a little bit more time to get to this, is I want to write one where it's meant for everyone, but when it comes down to it, it can reference examples. I'm gonna, like, list some Kubernetes-specific things, and I'm hoping that will, like, find a good enough balance there. I do think we have enough job setup where we might even need a separate doc for a few pointers, but I want to try to, people.
D
Don't click through links a lot, so I'm going to try to avoid needing that as much as possible, and just have, like, this is how you do a Pod Utilities job. And I think, like, when we were onboarding Service Catalog it came out really well; like, we should just have an example for something like go test, and that's generic enough that, like, if we have a pretty good example of how to do that for a repo, it doesn't.