From YouTube: Kubernetes Sig Testing 2018-03-13
A
B
So I don't quite understand the dependency graph between some of the tests. There's a cross builder which publishes a bunch of artifacts into its own separate GCS bucket, and then some other tests leverage it. That's part of what we're seeing on the kubeadm side of the release-blocking jobs for the 1.9-to-1.10 upgrade, but it doesn't look like they've actually been working for a long time. So I don't exactly know who owns it or maintains it, but I do know that that's the problem we're facing.
B
The reason why we have this weird setup, where one person shuffles a bit off to the other location that the other person depends upon, is because all the build artifacts aren't produced from a single canonical location. So there are two parts to this question: one is who owns the cross CI build jobs, and the second one is: where is Bazel at for cross builds?
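The flow described above, where one job publishes build artifacts to a GCS bucket and downstream tests consume them, can be sketched roughly as follows. The bucket name, path layout, and `latest.txt` pointer here are illustrative assumptions, not the actual job configuration being discussed.

```python
# Sketch of the implicit contract between a cross-build job and the
# tests that consume its artifacts. Bucket name and path layout are
# invented for illustration; they are not the real job configuration.

def artifact_location(bucket: str, version: str) -> str:
    """Where a cross builder might publish the tarball for one build."""
    return f"gs://{bucket}/ci-cross/{version}/kubernetes.tar.gz"

def latest_pointer(bucket: str) -> str:
    """A 'latest.txt' pointer lets downstream jobs find the newest build."""
    return f"gs://{bucket}/ci-cross/latest.txt"

# A downstream upgrade test would read latest.txt, then fetch:
print(artifact_location("example-release-bucket", "v1.10.0-beta.4"))
# gs://example-release-bucket/ci-cross/v1.10.0-beta.4/kubernetes.tar.gz
```

The fragility described in the discussion comes from this contract being implicit: if the builder stops publishing, consumers keep polling a stale pointer and silently test old artifacts.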
C
B
C
C
B
C
B
So how are you going to track resolution on this? We can test asynchronously of this, right? Like, we can do this by hand to validate, and we will probably do that anyway, just to be careful. But ideally... this automation looks like it's been broken for a really long time, and no one was paying attention until we got to the release.
B
C
B
Crazy, yeah. And then my plan for the 1.11 cycle: I've been talking with the Cluster API folks. Long term, Google wants to get rid of all of the cluster directory. Yeah, getting rid of all of the cluster directory and adding the Cluster API into the mix. And I would love to have folks tag-team on setting up blocking build jobs that get rid of kube-up and kubernetes-anywhere and use the Cluster API in the 1.11 cycle.
A
D
We have the... well, that'll give it some time to soak, I guess, so that could be a good thing to do, just because we just rolled out the new status changes. That being said, they're probably not a particularly volatile thing, or something that could really cause problems, so we could roll it out soon.
A
So I guess the one question I had, and this kind of touches on the last point on the list as well: there was a conversation about how many tokens we were about to be using after code slush, and then also, I know that with the status change implementation for Tide, there were a lot of conversations about whether or not that would use too many tokens. Are we worried about running out of quota there?
D
So I've actually been monitoring this, like, just over the past couple hours, because we just rolled out the status updates, and it really should not be using that many tokens. It used a few hundred tokens, I guess, the first time, to set all the statuses. And if you look at the API token usage graphs on Velodrome, you can see that there's a small spike, but it's not super significant.
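Besides the Velodrome graphs mentioned here, GitHub's v3 API exposes a `/rate_limit` endpoint that reports remaining quota. A minimal sketch of reading that response shape (the fetch itself, with an auth token, is left out; the sample payload mirrors GitHub's documented response format):

```python
# Sketch: extract remaining API tokens from a GitHub /rate_limit
# response. The payload shape follows GitHub's v3 REST API; actually
# fetching it (e.g. with urllib plus an auth token) is omitted here.

def remaining_core_tokens(rate_limit_response: dict) -> int:
    """Return how many core API calls are left in the current window."""
    return rate_limit_response["resources"]["core"]["remaining"]

sample = {
    "resources": {
        "core": {"limit": 5000, "remaining": 4700, "reset": 1520985600},
        "search": {"limit": 30, "remaining": 30, "reset": 1520985600},
    }
}
print(remaining_core_tokens(sample))  # 4700
```

A few hundred tokens out of a 5,000-per-hour budget matches the "small spike, not super significant" observation above.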
D
When we do deploy for the main kubernetes repo, it will only set statuses for some of the PRs at first, and then it will stop. We have some rate limiting for that now, so we shouldn't run into problems. It'll just take a couple hours, essentially, for all the statuses to be set for kubernetes, but we won't run out of tokens. So I think we should be pretty okay. Token-wise, I think we're ready for kubernetes.
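The "set statuses for some of the PRs at first, then stop" behavior described here amounts to budgeted batching: spend a fixed number of API calls per sync loop and carry the remainder over. A minimal sketch, where the PR count and per-sync budget are made-up numbers rather than Tide's actual settings:

```python
# Hedged sketch of budgeted batching: spend at most `budget_per_sync`
# status updates per sync loop, deferring the rest to later loops.
# The numbers used are illustrative, not Tide's real configuration.

def plan_batches(total_prs: int, budget_per_sync: int) -> list[int]:
    """Split `total_prs` status updates into per-sync-loop batches."""
    batches = []
    remaining = total_prs
    while remaining > 0:
        batch = min(budget_per_sync, remaining)
        batches.append(batch)
        remaining -= batch
    return batches

# 1,000 open PRs with a budget of 300 status updates per sync loop:
print(plan_batches(1000, 300))  # [300, 300, 300, 100]
```

Spreading the work across loops this way is what makes the rollout "take a couple hours" while staying well inside the hourly token quota.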
D
C
A
There's also a tracker issue, which is way down in the meeting notes. There were a couple other things that I don't know if we want to consider blockers for the actual implementation: there was a request to be able to ignore specific statuses that came in a couple weeks ago, and then, I can't quite recall, I think there's another request that the Tide status has to be required. So I guess, if you want to, we could also say: hey, let's make sure we can link the PR status.
A
C
D
It's not actually important for the Tide status to be required at all; there's no race condition or anything there. Yeah, that's more of a UI thing, and it almost makes sense for it not to be required if people are going to manually merge, you know, if there are still going to be a few people doing that, but I'm not sure whether it makes a big difference. Okay, those other points should be quick fixes, like allowing for other contexts to be skipped; that's an easy one to add.
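"Allowing for other contexts to be skipped" boils down to filtering the optional contexts out before deciding whether a PR's checks are green. A rough sketch of that idea, with invented context names and without any claim to match Tide's actual implementation:

```python
# Sketch of "allow some contexts to be skipped": listed optional
# contexts are ignored when deciding whether a PR's checks are green.
# The context names below are invented examples.

def checks_green(statuses: dict[str, str], optional: set[str]) -> bool:
    """True if every non-optional context reports success."""
    return all(
        state == "success"
        for context, state in statuses.items()
        if context not in optional
    )

statuses = {
    "pull-kubernetes-unit": "success",
    "pull-kubernetes-e2e": "success",
    "some-flaky-optional-check": "failure",
}
print(checks_green(statuses, optional={"some-flaky-optional-check"}))  # True
```

With an empty `optional` set the same PR would be blocked by the failing check, which is why making the skip list configurable is "an easy one to add" on top of the existing pass/fail logic.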
A
A
A
C
Actually, a quick note there: Sen and I have been talking about the fact that the kubernetes Prow config is still like this, yeah, and it's difficult to do things like make breaking changes to the config. I'm hoping, kind of in the next quarter here, we'll put out a plan for some proposals around redesigning the config. There have been some other interesting ideas; Joe kicked around the idea of inverting it and having repos...