From YouTube: Kubernetes SIG Testing - 2020-10-06
A: Hello everybody, today is Tuesday, October 6th, and you are at the Kubernetes SIG Testing bi-weekly meeting. I am one of the SIG Testing chairs and I am your host today: Aaron, of SIG Beard fame, spiffxp on Slack and GitHub. This meeting is being recorded and will be posted to YouTube. It will be publicly viewable, so we ask that you all adhere to the Kubernetes code of conduct by being your best selves and not being jerks. If you have any concerns, please reach out to me, or, if the concerns are about me or any of the other chairs, you're also welcome to reach out to conduct@kubernetes.io.
A
So
with
that
we
have
a
pretty
busy
agenda,
I'm
gonna
try
and
keep
us
to
10
minutes
per
item
and
I
will
hand
over
to
rob
to
talk
about
ci
policy
updates
also,
if
anybody
needs
to
screen
share.
Just
let
me
know
and
I'll
make
you
a
co-host,
so
you
can
do
that.
B: Okay, so the main thing I want to note here is that over the past week or two I've been working on a CI Signal status update report. The primary use case is for the CI Signal team lead to be able to automatically generate a report that goes to TestGrid, gets a list of summaries for the jobs in the sig-release-informing and sig-release-blocking tab groups, goes through the tests that are flaking, and lists out the tests that are actually flaking on a per-job basis.
At the moment I'm producing a CSV that gives the flaking tests per job per SIG, and the intention for next week's development is to add which GitHub issues have been logged for the flaking tests reported there.
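(A minimal sketch of the TestGrid step being described, in Go. It assumes TestGrid's public per-dashboard /summary endpoint and an overall_status field in the returned JSON; both are undocumented details worth verifying before reuse.)

```go
// List tabs on a TestGrid dashboard whose overall status is FLAKY.
// The endpoint shape and field names are assumptions based on TestGrid's
// public summary JSON, not a documented contract.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	const dashboard = "sig-release-master-blocking" // also: sig-release-master-informing
	resp, err := http.Get("https://testgrid.k8s.io/" + dashboard + "/summary")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// One entry per tab; we only care about the overall status here.
	var tabs map[string]struct {
		OverallStatus string `json:"overall_status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tabs); err != nil {
		log.Fatal(err)
	}
	for name, tab := range tabs {
		if tab.OverallStatus == "FLAKY" {
			fmt.Printf("%s / %s is flaky\n", dashboard, name)
		}
	}
}
```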
So if we have issues, great, I'll put them against the row; and if I don't have issues, then I will flag that as a flaking test that isn't being tracked by CI Signal. That's the update. This work is blocking me from doing the CI policy updates, but I expect to be able to return to that work hopefully next Monday.
And the other thing is that, although I was thinking about putting this into the experiments folder in test-infra, I've gotten feedback from sig-release that they have a repo I can drop this into, and the whole sig-release team will be able to give me feedback on it, because it's more of a report for CI Signal and sig-release than it is about tests; it's tracking the CI Signal work.
Now, that has been a useful reference, and there is certainly overlap from a code point of view. In actual fact, when I go to TestGrid and pull the job status data structure down, there are fields there that, were they filled out, would mean I wouldn't need to do this report; specifically, the fields linking test flakes to bug reports are presently blank.
B
So
I
could
very
much
see
this
tool
been
being
an
interim
tool.
You
know,
but
but
yeah
the
feedback
for
the
feedback
from
steven
was
to
put
into
decay
release.
You
know
how
much
of
this.
Oh, none, really. I mean, you have to walk in the shoes of a CI Signal team lead to experience the pain of doing that report manually on the Monday; it consumes your Monday. So I'm just turning this into a single-click action to get that status update.
There are potential future enhancements of this, in terms of maybe running it as a prow job, maybe generating time-series data that could be consumed by Prometheus and Grafana.
But that's down-the-line thinking for this. So I would say, whatever repo it lives in, just keep an eye on it, because it may be useful, and it could be that we look at this in a few weeks' time and go: actually, let's take this code and use it. It's a bit of a tricky one.
C: I do think, though, that while sig-release may be steering people to look at the CI dashboard and whatnot, the infrastructure of testing is kind of the main thing that is actually in this SIG's wheelhouse.
A: I would like to see us have all of the tools that make test results more actionable.
I want people to feel empowered to go forth and iterate, so if that is where you feel safest doing that, I am okay with it at the moment, possibly re-evaluating later. The other little engineer thing going on for me is duplication of effort: it sounds like you're using TestGrid as your source of truth for what's flaking.
B: So, where we log issues, we have them on the board going from waiting-on-response to triage to under-investigation to observing, so there are opportunities there for CI Signal to track the CI Signal effort in terms of logging issues, aging those issues, and so on.
A: And I think it sounds like it might be worth trying to get a conversation going about if and how we could link things like GitHub issues to TestGrid failures. I believe I was talking about something called Entomologist, I think, a while back: a tool that was sort of supposed to run as a component in a Kubernetes cluster and connect issues that had a well-formed syntax to the data structure that TestGrid reads to populate those fields. So that might be worth examining as prior art for linking those together.
B: Yeah, sorry to talk across you, but what I would say is that this is almost a pet project; it's almost by me and for me. I am not precious about putting in this effort, putting it wherever, having people look at it, evaluate it, see what's useful and what's not, and treating it as a prototype report that we then circle back to.
A: I hear what you're saying, which is why my overarching message is: I want you to feel empowered to go forth and iterate. I think it's just helpful for us to remind ourselves what prior art exists, if you want to look at it. So the only other thing I'll put out as an idea is that all of our test results end up going through a data pipeline.
They land in GCS, and then there's a thing called kettle that takes them and transforms them into more structured data that lands in a BigQuery database that is publicly queryable. We have prow jobs that run queries against this; one of those identifies, for every single job, the tests that both pass and fail for the same commit. And I think there's a JSON file linked off somewhere from the metrics directory in test-infra that may be restricted just to the pull request jobs, but the query that runs actually does this for all the jobs.
So, even more than just the release-informing and -blocking ones. It could be that you key off of the job names, which you can get from TestGrid, I think, link that with that JSON file, and get flakes that are happening that are not yet identified as flakes, or as persistent failures, according to TestGrid. And to take that one step further:
A
We
we
also
had
that
same
data
set
is
used
in
driving
the
triage
dashboard
and,
at
one
point
in
time,
had
a
bot
that
was
responsible
for
looking
at,
like
the
top
flakes
on
that
dashboard
and
creating
issues
according
to
those
and
then
trying
to
use
the
sick
name
in
there
to
automatically
tag
it
with
the
sig
and
tag
it
with
a
test,
failure
and
whatnot.
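(The pass-and-fail-on-the-same-commit idea can be expressed as a query against the kettle-populated dataset. A sketch in Go; the table name k8s-gubernator.build.week and the repeated test field are recalled from the public schema and should be verified before use:)

```go
// Query the kettle-populated BigQuery dataset for tests that both passed
// and failed within the same job over the past week. Table and field names
// are assumptions to verify against the public schema; the real metrics
// query additionally groups by commit to catch same-commit flakes.
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	// Any project you can bill queries to; the dataset itself is public.
	client, err := bigquery.NewClient(ctx, "your-gcp-project")
	if err != nil {
		log.Fatal(err)
	}
	q := client.Query(`
		SELECT job, t.name AS test_name,
		       COUNTIF(t.failed) AS failures,
		       COUNT(*) AS runs
		FROM ` + "`k8s-gubernator.build.week`" + `, UNNEST(test) AS t
		GROUP BY job, test_name
		HAVING failures > 0 AND failures < runs
		ORDER BY failures DESC
		LIMIT 20`)
	it, err := q.Read(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for {
		var row map[string]bigquery.Value
		if err := it.Next(&row); err == iterator.Done {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v: %v of %v runs failed\n",
			row["job"], row["test_name"], row["failures"], row["runs"])
	}
}
```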
So I'm just throwing those out there as things you could use, but having done CI Signal myself, I'm in favor.
B: Yeah. Well, then, what I'd ask is: if you get a chance, review the code. I've separated it out into data collection and then data formatting, and I think you'd be able to go through it fairly quickly. I think I want to end my time box now, but I might have a quick chat with you afterwards and get some references.
I'm interested in running queries against that BigQuery database; that could be useful, and a little bit more robust as well, I think.
A: Appreciate that. And yes, I would say let's hand off to elmiko. I don't know what your actual name is.
F: My name is Mike, or Michael, but elmiko's fine as well. Nice to meet you. So yeah, I'm Mike McCune, I work for Red Hat, and I prepared a little slide deck just to help talk about the issue I wanted to raise here. I'm going to attempt to share it, and hopefully that'll work.
F
Okay,
I
guess
is
everybody
seeing
this
slide
deck,
okay,
cool,
so
yeah,
so
in
I'm
kind
of
relatively
new
to
working
directly
on
kubernetes,
I've
only
been
working
on
the
auto
scaler
for
about
six
months
now
and
working
in
cluster
api
as
well,
and
we've
integrated
cluster
api
with
the
auto
scaler.
F
But
one
of
the
problems
that
this
has
created
for
us
is
that
we
would
like
to
now
run
end-to-end
tests
using
the
cluster
api
provider
with
the
auto
scaler.
So
some
of
the
goals
that
we're
looking
to
get
out
of
this
is
you
know
we
would
like
to
be
able
to
add
cluster
api
as
a
provider
to
those
tests.
F
We'd
also
like
to
be
to
be
able
to
enable
more
automated
tests
for
the
cluster
auto
scaler,
because
currently
I
think
these
tests
are
just
run
periodically
and
they're
only
run
on
gce
and
gke.
I
think,
hopefully,
by
doing
this,
we
kind
of
open
a
pathway
for
more
contributors
to
create
cluster
auto
scalar
tests.
You
know
for
their
providers.
So,
like
you
know,
part
of
the
goal
here
is
to
pave
a
pathway
for
people
to
create
more
providers.
F
For
these
tests-
and
then
you
know,
I
think
a
stretch
goal
would
be
like.
Maybe
we
could
demonstrate
how
other
tests
that
have
providers
linked
to
them
could
do
the
same
sort
of
transformation.
So with that said, this is a really 10,000-foot view of what's going on currently in the kubernetes repo: the cluster autoscaler tests are there in the e2e testing, the provider interface is also there, and we have what are essentially three concrete implementations of that interface: GKE, GCE, and kubemark.
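(For context, the provider interface mentioned here amounts to a handful of node-group management hooks. A rough Go sketch of its shape, with illustrative method names rather than the exact ones from the e2e framework:)

```go
// Rough sketch of the shape of the autoscaler e2e provider contract under
// discussion. Method names are illustrative; the real interface lives in
// the kubernetes/kubernetes e2e framework, with GKE, GCE, and kubemark
// implementations today, and Cluster API would be a fourth.
package provider

import "context"

type NodeGroupProvider interface {
	// ResizeGroup scales the node group under test to the given size.
	ResizeGroup(ctx context.Context, group string, size int32) error
	// GroupSize reports the node group's current size.
	GroupSize(ctx context.Context, group string) (int32, error)
	// EnableAutoscaler turns autoscaling on between min and max nodes.
	EnableAutoscaler(ctx context.Context, group string, min, max int32) error
	// DisableAutoscaler turns autoscaling back off for cleanup.
	DisableAutoscaler(ctx context.Context, group string) error
}
```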
So far, Ben Moss from VMware has been putting together a proof of concept for us, and the line of thinking we had was as follows.
Example one is one way we could go forward: we could propose a CAPI implementation of the provider interface to the kubernetes repo. But, as the issue Tim St. Clair pointed me at suggests, I don't think there's good acceptance for adding more providers into the kubernetes repo.
Another example we were considering was to keep the cluster autoscaler tests in the kubernetes repo, improve the interface there for the cluster autoscaler providers, and create a small library as well. This library would help individual providers do the setup and teardown that the tests expect. In this model the providers would live outside of the kubernetes repo, and individual providers would create them as they felt necessary.
The third example would be: what if we take the tests out of the kubernetes repo, take the providers out as well, and just let the tests live in another location, perhaps next to the autoscaler, with the providers living in their own repos as normal. So these are the three ideas we're currently thinking about for how we could integrate with the rest of the testing.
Now, example one seems undesirable because we're just adding more provider-specific code into the kubernetes repo. For example two, I think we have some acceptance from the autoscaling group: they like the idea of keeping the tests in the kubernetes repo, adding more of a library there, and allowing the providers to live externally. But this raises the question of how we would link a provider to the tests during deployment.
That would have to be some sort of dynamic linking step, and Go doesn't have great support for this, so we would have to work with the testing community and the testing frameworks we have to do that.
Example three is the easiest approach from a breakage standpoint, because we can just start afresh, but I think it represents a shift in thinking for how the tests would be deployed from the kubernetes repo, how they'd be run, and how they would fit in with the rest of the structure.
So what I'd like to ask the group here is: what ideas do you have? Has anybody thought about this idea of provider separation? What would be the most copacetic way to do this with the testing group in mind? And then, what should the next step be: do we need to write a KEP about this, do you want to see a proof of concept, or what would the group like to see?
C: A quick note first of all: we're certainly not introducing providers currently; that is definitely the case. In fact, the goal is to remove all of the existing provider stuff. It's a little unclear exactly how that's going to work for the tests; there's been a lot of work put into how to remove it from the core components. So we're encouraging essentially everything to use the fake "skeleton" provider, with possibly some circling back on whether that ought to be the default, and on its perhaps confusing name.
We also have some precedent for things that don't use core APIs having their own test binaries.
For example, the node e2e tests do all kinds of things that you might not necessarily consider actually Kubernetes, like SSHing to the actual underlying node and poking around, so those are in their own test binary. That might be something to consider here: splitting these off. It isn't super clear that there will be a definite core API standard for this, right? This is an add-on, right?
Yeah: anything the test is going to interact with is not an API that comes with Kubernetes core, correct?
C: Yeah, I think that should at least be considered when we're writing this up. There certainly are tests today that do terrible things, like testing not just a specific cloud provider but specific cluster-up implementations (firewall rules, for example, is one of the more ridiculous ones), which we haven't purged yet. We definitely don't want to introduce more of that; we want to figure out how to purge those. It just hasn't been super high priority, and there hasn't been momentum to delete them.
Most of them aren't causing problems by existing: they're not default tests, and they aren't themselves introducing new dependencies. But we still centrally have the core Kubernetes cloud provider abstraction concept tied into things, which is a bit difficult to remove. I think we want to move away from that where possible and get to a place where the e2e.test binary just works on clusters and doesn't know about cloud providers, and cloud providers have their own tests.
F: Yeah. So I had originally started a document talking about how we might do this, and my original thinking was that we would have the tests externally separate, allow the providers to bring the tests in as a library, and then let each provider build a binary that they would just run against a kubeconfig, and it would do the right things for their tests.
I guess the question there is: if we created that much separation between where the different code pieces are, is there an acceptable way for us to bring that into the automated testing that happens, like what's triggered by prow on PRs and whatnot? Is there a way we could fold that in?
C: For the sort of glue that we use in CI, there's this project, kubetest2, and it's headed in the direction of having each of the tests be a binary that essentially expects to receive the kubeconfig and a few additional niceties. It similarly has a standard contract, with separate binaries, for a deployer binary that is aware of how to manage a cluster but imports a library with a bunch of code that does the CI integration and lifecycle glue: we want to make sure things get torn down on a SIGTERM from the pod we're running the test in, and that sort of thing. So actually the model there is pretty much what you described: you're going to have a tester binary, it's going to receive credentials for the cluster, and there will be some wrapper around it that handles bridging things to CI.
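(A minimal sketch, in Go, of the tester-binary contract being described: credentials arrive via KUBECONFIG and the process exits cleanly on SIGTERM so the CI wrapper can tear the cluster down. This mirrors the direction described for kubetest2, not its actual API.)

```go
// Minimal sketch of the tester-binary contract: receive the cluster via
// KUBECONFIG, run checks, and stop cleanly on SIGTERM so CI can reclaim
// the pod. Mirrors the kubetest2 direction as described in the meeting,
// not its exact interfaces.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"
)

func runSuite(ctx context.Context, kubeconfig string) error {
	// A real tester would build a client-go clientset from the kubeconfig
	// and drive its suite here; this stub just reports what it was handed.
	fmt.Printf("running tests against cluster from %s\n", kubeconfig)
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		return nil
	}
}

func main() {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		log.Fatal("KUBECONFIG must point at the cluster under test")
	}
	// CI sends SIGTERM to the test pod on abort; cancel the run so any
	// cluster-scoped fixtures are torn down before exit.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
	defer stop()

	if err := runSuite(ctx, kubeconfig); err != nil {
		log.Fatal(err)
	}
}
```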
F: Yeah, that sounds exactly like what we want. So I guess one last question; this has been really productive, but I do want to be sensitive about the time here. If we go with the kubetest2 model of things, should we just write our PoCs and build it all, or is there a KEP that the testing group would like to see, to talk about what we're going to do?
C: Perhaps if we go write this new thing; but otherwise I think it would be totally reasonable to just go do your own thing and reach out for any help you need. It seems like something that might be reasonable to ship sort of with the cluster autoscaler project; I think that's the direction we're trying to encourage: each project ships their own.
Furthermore, some of the people on my side at Google who are working on this tooling are also supporting projects that aren't actually even part of the kubernetes org, so it's kind of the same thing: we're not going to put any of their tester stuff into core, but we're leaving a path where they can ship their own.
F: Okay. Because they're tied, you know, to the Google platforms; they're good tests, but they run very infrequently, and I was looking at them recently: they've been failing for the longest time, so I think the value we're getting back from those is very low. And here's the concern we have from the CAPI side of things.
Cluster API is iterating a lot, and it's integrating with the autoscaler. The rest of the autoscaler providers are fairly stable at this point, but we want to make sure we're not going to break the autoscaler as we ramp up to higher versions of Cluster API. So, long term, we'd like to get to a place where we can run these automated on commits, and I just want to make sure that what we're doing will fit in line with that, if it's accepted eventually.
C: Yeah, this is not the sort of thing that we would have in presubmit anyhow, so for running it in CI we're not going to be super concerned about what repo it's in. If it were something completely outside of the kubernetes org, that may raise some concerns.
But if it's some kubernetes project and you want to run CI, that's not a problem. And then getting the release team or something to look at it is sort of its own discussion, and again it doesn't really depend on where it is. I'd actually say you'll probably have an easier time elsewhere: it's currently a little bit expensive to iterate in the kubernetes repo, so you'll probably have a much smoother time.
A: Good. I appreciate your time and the presentation. To answer your question about PoC or KEP: I hate to give you the answer of "why not both?", but if I were going through this process, I feel like a PoC is probably going to uncover some of the gnarled details.
Those would help create a more fully formed KEP. And I agree: the challenge for me is that when you use the word "provider", there's the Cluster API provider, the cloud provider, the credentials provider, the storage provider; there's a bunch of stuff. I think we're, unfortunately, trying to be the bad guys and say: look, this provider thing that all of our tests rely upon...
...we need to just stop building on it, and we need to come up with a better solution. So I think iterating out of the repo, your third option, the one with the least amount of breakage, is probably also going to be the best way for us to figure out a more extensible model going forward. So thanks for your time.
C: Let's see here; I've got my little distraction this morning. Something about 10 o'clock meetings, he just gets all worn out. So, I think many of you have probably heard about a pending change from Docker Inc.: they're going to change some of their policies around Docker Hub. This is mostly sort of a PSA, to get people thinking about it a bit.
I have a few particular thoughts in mind, but at a super high level: they're going to start rate limiting how many pulls you can do as a client if you're not authenticated, and even if you are authenticated, and they're going to start garbage collecting images that go unpulled. The latter doesn't seem like a problem.
You can always keep a copy backed up elsewhere. But the changes where we're going to be limited on how many pulls we can do seem potentially pretty problematic for a lot of the CI throughout the project, where who knows how many images we're pulling from Docker Hub. Even if we're building and shipping our own images somewhere else, it's pretty likely we have base images we're pulling, or random images in the e2e tests.
So I think we probably need to look at minimizing our dependency on this for live pulls in things like the kubernetes end-to-end tests, and we probably also need to look at getting CI authenticated with some non-elevated credential to Docker Hub, or possibly a pool of them, and look at running a pull-through cache.
I'm a bit concerned that this is coming in about a month or so, that I don't think anybody's taken a super strong look at this for Kubernetes, and that we're probably going to have all sorts of CI issues crop up. So my main proposals are: we should look at getting a sort of official kubernetes CI pull credential, and we should look at running a pull-through cache.
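(For gauging the exposure: Docker documented a way to read the current pull-rate allowance via the dedicated ratelimitpreview/test repository. A sketch in Go, assuming the endpoint and header names from those docs; verify against the current docs before wiring this into monitoring.)

```go
// Check the Docker Hub pull rate limit an anonymous client is subject to,
// via Docker's documented ratelimitpreview/test repository.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Anonymous pull token scoped to the rate-limit test repo.
	resp, err := http.Get("https://auth.docker.io/token" +
		"?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// HEAD returns the rate-limit headers without consuming a pull.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("limit:    ", resp.Header.Get("RateLimit-Limit"))
	fmt.Println("remaining:", resp.Header.Get("RateLimit-Remaining"))
}
```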
But I'm not sure if there's anything else we should be thinking about there, and I would like people in the SIG to take a look at this before the changes land; they're landing early November.
D: Yeah, in our scenario we basically have to merge the pull requests that have been sent, which add Windows support to the rest of the conformance images we're using. That would basically mean the image builder, the kubernetes image builder, is going to build those Windows images; they'll be hosted on the k8s.gcr.io registry, and that's going to be problem solved for us as well.
Basically, I have two pull requests which add Windows support to the rest of the conformance images, and of course we're going to need the buildx pull request, which I have listed in the agenda down below. I guess we can talk about that then. Okay.
H: Ben, do you have any plans or thoughts to send out a note or a warning to the rest of k-dev? So I can bring this up in SIG CLI tomorrow.
C: Yeah, I didn't have any concrete plans yet; I was kind of hoping to get feedback on that sort of thing here. That really sounds like something we should do.
A: Okay, cool. In the interest of time boxing, let's move on to Jay, to talk about the NetworkPolicy framework KEP... that's not really a framework.
G: Yeah, yeah, I shouldn't call it a framework, I'm sorry. Okay, so can I share my screen?

A: Let me make you a co-host.
G: Okay, cool, all right. So, for folks that weren't around, this goes back a ways: we've had a lot of tech debt in the network policy tests for a while, so I started working on this a while ago with Sadaf, who I actually mentioned; I think Sadaf's here and should just join also. She worked on a few CNIs. The overall problem, and a lot of people don't know this, is that we don't actually run these in CI at all. And that's okay!
For now, that's not the problem I wanted to solve today. But they aren't running in CI, and I think one of the reasons is that, as they stand, they're very slow, and there have been issues with them. We fixed some of them here over at VMware, but the fact that these have been broken for so long means that CNI providers haven't been running them either.
So I've been working with different CNI providers, both Calico and Antrea, to put together a way to run these tests that's fast and sane and easy to maintain. The number one concern, I know, is the word "framework". I wrote the KEP with Sadaf and a bunch of other people.
What we did is take a lot of the concepts from what's running in the Calico CI (stateful pods), take the polling and probing of those pods, and merge that with other things we needed in terms of performance and test coverage. And I asked Sean if that was okay.
Not only is the current approach bad, but in slow clusters it's really bad, because you have to spin up so many pods to test all the policies, and the problem gets worse and worse. Every new test we add is about 100 new lines of code, so it's already to the point of being unmaintainable. So anyway, there have been issues about this around for a while.
This one came out February 5th, and we can go back further into the history, but anyway: we have this KEP, and I updated it to clarify that the goal is not to push this KEP on sig-testing. That was never the goal, but I think maybe it was read that way, probably because of the way it was worded when I was talking to Bowei about it originally, because we had this idea that maybe every test could eventually be a diagnostic.
So what does the new test do? Well, it's not adding technical debt; it's removing technical debt. I did the math today just to make sure. We've separated these into files (if having extra files is not a good thing, we could combine them all into one file), but the net change is a net decrease in code.
More importantly, from a technical perspective, it's a net increase in coverage and a big net increase in readability. This is how you write a new test now: it's about 10 lines of code, and because the whole thing is probed concurrently, these tests run in about a tenth of the time of the original tests, or less.
There's a demo of this in a SIG Network meeting from a year ago, where it was generally approved to keep working on this intermittently. And I made the mistake of not considering the fact that this is an end-to-end test in the e2e test binaries. I assumed I didn't need to bring this through the sig-testing world, because one of the rules for KEPs is that if you're just writing a new test, you don't need to write a KEP, and from the sig-testing abstraction that's all this is; it's just fixing an old test.
So I felt the KEP was a sig-network KEP: the whole point of it was working with Antrea and Calico and the other CNI providers to get a test that we could actually feasibly run in CI. I would like to run this in sig-testing CI, but I think that problem is orthogonal to the KEP, which is: let's get these tests working in a way that the providers can use them.
So there's the net change, and here's the new output for these tests. This solves the original issue that was filed, which is that nobody can really understand these tests when they're trying to figure out what the policies are doing. The way they work now applies per policy, and actually I think it would be cool if we could do this in more e2e tests over time.
Every policy shows the exact policy in JSON, and then the exact matrix of connections that worked and didn't work. So, in a matter of seconds, you can figure out exactly what was happening and why.
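(The truth-table idea reads roughly like this in Go: probe every (from, to) pod pair concurrently and record the observed matrix, which the test then diffs against what the policy implies. Type and function names here are illustrative, not the actual e2e framework's:)

```go
// Illustrative sketch of the truth-table approach: probe all pod pairs
// concurrently and record who could reach whom. A real test diffs this
// observed matrix against the expectation implied by the policy.
package main

import (
	"fmt"
	"sync"
)

type pair struct{ from, to string }

// buildMatrix probes every pair concurrently, which is what makes the new
// tests roughly an order of magnitude faster than sequential probing.
func buildMatrix(pods []string, connect func(from, to string) bool) map[pair]bool {
	var (
		mu  sync.Mutex
		wg  sync.WaitGroup
		got = map[pair]bool{}
	)
	for _, from := range pods {
		for _, to := range pods {
			wg.Add(1)
			go func(p pair) {
				defer wg.Done()
				ok := connect(p.from, p.to)
				mu.Lock()
				got[p] = ok
				mu.Unlock()
			}(pair{from, to})
		}
	}
	wg.Wait()
	return got
}

func main() {
	pods := []string{"x/a", "x/b", "y/a"}
	// Stand-in probe: a real test execs a connection attempt from inside
	// the "from" pod. Here, pretend policy allows same-namespace only.
	sameNS := func(from, to string) bool { return from[0] == to[0] }

	got := buildMatrix(pods, sameNS)
	for _, from := range pods {
		for _, to := range pods {
			fmt.Printf("%s -> %s: %v\n", from, to, got[pair{from, to}])
		}
	}
}
```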
So that's the overall context for everything. I did also create a pull request into the enhancements repo so that this doesn't happen again, because I guess there's some confusion in the KEP template: it says to check the impact on testing, but it doesn't say anything about the case where you're improving tests. So I added a little change there that says: even if you're just improving tests, make sure the KEP checks the impact of everything. So that's my explanation of all this stuff; sorry for alarming you guys the other day. I know it kind of came out of nowhere.
C: Thanks a ton for that. No, yeah, that change sounds pretty reasonable. The thing that got me was someone pointing me to this PR, asking questions about it in the sig-testing channel, and this enormous PR says it's adding a test framework into the e2e tests.
That set off some alarm bells; that is the kind of thing we would definitely want to know about. And then I followed back to the KEP, and the KEP was mostly TBD on reviewers and approvers.
G: Yeah, so I did that: I updated the KEP now, Ben, so it's LGTM'd now. If someone else could LGTM it, that would be great; Bowei reviewed it yesterday. So I updated that, and that part's also my fault, but here are all the reviewers and such, and I made it very clear that we're not going to just dump this thing on y'all.
C: This is really useful; thank you for that. It's just that when you open a giant PR adding a test framework, that sets off a "hey, wait a sec".
G: Yeah, okay, so I guess we're in a decent place. Let me know if there's anything you want me to do to help with this. The other thing I wanted to ask is: is there anybody that would be interested in helping me get these into the actual CI? I don't mind doing it all myself, but I'd just be googling around a lot.
A: I want to be helpful. I also want to be realistic about the fact that I am prone to disappearing for large periods of time; I'm hoping the most recent bout of that is coming to an end. So I think my ask might more be: I want to understand where our documentation is not answering your questions to begin with. Either you didn't find what you needed in the place you expected, or it turns out the answer you're getting is tribal knowledge and not documented anywhere.
As for which repos the testing runs from: I would start with test-infra. The config/jobs directory is where all of our job configs for prow live, and there's a README in there that attempts to be a basic cookbook. It tells you: if you want to run this kind of job, here's what the config should look like.
If you want to test it, here's how you might test it. And then it tries to link to more informative references, like prow's API and so on. But since it's specifically about kubernetes jobs, it's trying to describe the conventions we use for prow.k8s.io, as opposed to what OpenShift or Jetstack or other prow deployments use.
A
I
was
gonna,
maybe
recommend
george
the
testing
commons
folks
might
have
some
more
experience
helping
out
so
I'd
like
drop,
drop
questions
in
in
slack
for
sure
and
when
we've
got
time,
we'll
answer
for
sure,
but
yeah.
C: Specifically for setting up CI with networking, I would victimize my favorite victim, Antonio. Yeah, yeah.
I don't know if he has time right now either, but I'm kind of in the same boat as Aaron: I'd love to help, but I'm a little hesitant to sign up for anything else.
A: Yeah, he just said he'd help. So, sorry, I apologize: I forget if it's "Jorge" or "George"; I'll get it right.
My other meta comment: I think your instinct, that this is mostly self-contained to sig-network, is probably okay. But you were still about to add a large chunk of code to the test directory.
I still think giving us a heads up and asking for a review probably would have avoided the surprise here. That would give us the opportunity to say, "oh, this is not our thing, you're good." But I feel like one of the problems we have to remind ourselves of in kubernetes is that it's really helpful to over-communicate, because too often there are people who go, "well, I never heard of that; why wasn't I involved?"
Anyway, it's all good. I appreciate the presentation, and it looks like you dropped slides in the notes, so cool. I'm going to move us along to Claudiu, to talk about using buildx for test images.
D: Yeah, so I'm here with Ernest Wong. Our primary goal is to have Windows support on pretty much any and every image, and today we're going to talk about the kubernetes test images.
Currently, the image builder is already building Windows images for a couple of the test images, but to do that it uses a couple of remote Windows Docker nodes in Azure. Someone said that maybe it's not such a great idea to rely that much on external resources, and that made me think of an alternative: Docker buildx.
I have created the pull request with all the buildx implementation details required for Windows, including for all the conformance images. I also did a full conformance test run on Windows nodes, and they all passed, so it's working. It's really helpful that the pause image buildx implementation already got approved, because this uses a lot of the same mechanisms. But for the test images there's more involved.
There are a couple of extra workarounds we have to use: for a couple of images we need to run commands, and because we're cross-building Windows images, we cannot run commands in the Windows stage; it's impossible.
The helpers for that are also included in the pull request, their Dockerfiles in particular. We need a busybox helper and a PowerShell helper, which prepare those bits so we can then copy them into the final images. That's one of the things everyone will have to agree on. At the moment, with the pull request I've sent, the image builder will no longer require any Windows nodes, on the condition that it can use those helpers I've mentioned.
A: Certainly, in the name of merging and iterating, it sounds like a substantial step forward from where we are. And then, just like you had the "hey, we should not use Windows nodes" thought, I think the next question would be: how can we make sure that the building of these helper images, or running things on Windows, or whatever, is a process that can still be done by anyone in the community?
D: Yeah; ideally they will never have to be rebuilt again. They are pretty straightforward, for example the PowerShell helper.
So if anyone ever wants to build those helper images themselves, they're going to need a Windows node for that, but just one, and that's it. Additionally, this will also help in the future when we want to add support for multiple OS versions of Windows for those test images: right now we'd have to have multiple Windows nodes for that, but not with buildx.
And there's just one other minor thing we have to agree on. We are using nanoserver images as the base image for all the images; the reason is that they are a lot smaller than their Server Core counterparts, at least 10 times smaller.
Now, it wouldn't be a great idea to pull those Server Core images every time the image builder job runs: they're huge, and they would at least triple or quadruple the build time for those images, not to mention any potential costs associated with that. So I was suggesting a periodic job, a monthly job, which builds a cache image containing only those DLLs we depend on. Basically, we build that image and pull the Server Core image once per month, and then use the cache in the regular image builder job.
C: That sounds perfectly reasonable to me. If a monthly update rate is sufficient for that, then that sounds like a nice optimization for building the test images potentially much more frequently.
A: That sounds good to me. Part of the thing for me is visibility into whether we have outsized resource consumption, and what you're describing doesn't sound like it will be a problem. So any decision is certainly subject to a later "hey,
A
We've
noticed
that
resource
usage
for
this
particular
piece
of
infrastructure,
or
these
sets
of
jobs,
is
sort
of
out
of
line,
and
we
need
to
rethink,
but
everything
you've
discussed
sounds
reasonable
and
I,
I
think
the
things
you're
looking
for
agreement
on
should
be
the
the
usage
of
these
helper
images
that
you
know
we
can't
automatically
build
using
our
infrastructure
and
occasionally
using
our
infrastructure,
to
build
smaller
cache
or
base
images,
though
it
would
sound
reasonable
to
me
as
a
as
a
chair.
D
D: Those were the main concerns we had to agree on. Once this gets in, plus the other two pull requests I mentioned, we won't really have to use Docker Hub for testing purposes like we currently do across so many TestGrid boards and so on. And speaking of Docker Hub: all the base images for all the test images are on Docker Hub, so we might have to move those base images to k8s.gcr.io as well.
A: I think that sounds maybe doable. It might lead to a broader conversation, possibly even at the CNCF level, about all of the members of the CNCF and the ways in which they can contribute to this project's resources for CI. But I think, yes, migrating to k8s.gcr.io probably sounds reasonable.
We get into a tricky thing of not wanting to be a mirror for the worldwide ecosystem, because a substantial amount of the project's costs right now go to serving hosted artifacts. But I think that sounds like a reasonable approach, and we can start down that path and adjust if we find out it's not working the way we expected.
C: I think we don't have a good pattern to manage them yet; we'll need to look at it. But for something like this we just need a periodic job to pull this large image. The size actually shouldn't matter from a rate limiting perspective; it's just the number of pulls, and since it's infrequent it shouldn't be a big deal. Even if it were, we could add credentials.
I think the more interesting thing there is that we do this build in Cloud Build, and I have no idea whether Cloud Build will be sharing VMs, and therefore IPs, with other random jobs. Are we going to hit unavoidable rate limits there, or do we need to authenticate? If we need to authenticate our cloud builds, even then, I think we could in theory have a user account for this build.
I think we might run into some problems here, but I don't think the problem is going to be that we have this build; I think we'll just run into the same general issues, and whatever applies to everything else will apply there, without costing us excessively versus anything else. I do think that if we moved the primary host of, say, the Windows Server images to k8s.gcr.io, that would probably be a problem for the project, just as a bandwidth cost from all the third-party users.
A: The cost is going to be negligible for that. So: I want to be cognizant and respectful of people's time; we're seven minutes over, so I think I'm going to cap us off there. I really appreciate all the time people put towards preparing to talk here and keeping these conversations actionable. Hopefully we all got the decisions we needed, and I look forward to seeing you all again in two weeks' time. Until then, I'll see you all in Slack.