From YouTube: Kubernetes SIG Testing 2017-07-11
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: Okay, hi everybody. Today is Tuesday, July 11th. This is the Kubernetes SIG Testing meeting. It is being publicly recorded and will be posted to YouTube a little bit later. On the agenda today, I'm going to go over some of the issues that we've committed to the 1.8 milestone and some of the labeling policies that I've put in place, just to sanity-check that that's cool with everybody here, and then I will hand off to Chris McClimans to demo the cross-cloud projects that are happening as part of the CNCF CI working group.
A: All right, so last week I created many, many GitHub issues that were basically conversions of tasks from the Google Doc that Erick Fejta and I put together; it sort of laid out what we would like to accomplish for 1.8 and what we would like to accomplish for the rest of 2017.
A: So the 1.8 milestone, which is worth a thorough reading, contains basically everything we think we can get done between now and when the release gets cut on September 30th. The main theme here is to really get everything over to scenarios and to really start moving away from mungegithub towards Prow, and towards a more operations-friendly way of running things. At the same time, we do have some efforts underway to make mungegithub faster to restart, and we're also interested in things like improving transparency.
A: We have uses for metrics, so we'd like to be able to instrument Prow and have that feed into Velodrome. I think it'd be cool to see the flakiness as it actually gets better over in Velodrome as well, so we can start seeing whether our test-improvement efforts are really helping over time. And then something I am personally interested in, and volunteering for, is to allow people outside of Google to be on the test-infra oncall rotation.
One of the key technical things that's sort of holding us back from that is Google-internal credentials, and we'd like to find something that lets us assign these roles to other people, such as myself and anybody else who's interested in signing up for them. Part of this effort is going to involve documenting what the responsibilities of these roles are, what credentials they need to have, and what knowledge or experience they should have to be capable of fulfilling these duties. I think some of the same stuff is going on with the submit queue.
A: And I'm clicking around a lot; I hope this is easy to follow. Another area label that I added that didn't exist before is area/bootstrap. That one is kind of weird, because there is no bootstrap directory; this job is a Python script that lives inside of the jenkins directory right now. But it seems like so much functionality is getting thrown into it, and we would also like to move away from Jenkins, so I'm hopeful we can at some point refactor bootstrap into its own directory.
In the meantime, it seems like enough specific stuff is happening around it that it deserves its own label. I also created an area label for the metrics subdirectory and an area label for the kubetest directory. There's other stuff, like oncall-related tasks, that doesn't have any area label right now; I'm not sure what to label them so that we can keep track of them.
A: And then, of course, there are a bunch of issues being opened up that just don't have any labels applied right now, because we as a team, or as a community, are not being consistent about applying them. So I'd like to actually document this label policy, announce it to SIG Testing, and then talk about it at the community meeting to sort of help people realize that, unfortunately, there are more repos than just kubernetes/kubernetes.
A: Hopefully that's cool with everybody. If there's nothing controversial here, my plan was to open a pull request and put the information in the CONTRIBUTING.md file in test-infra, since that seems to be the file that GitHub links you to whenever you try creating a new pull request or a new issue, just to let you know that it might be a good place to look for information about how to follow this particular repository.
A: Cool, okay. With that, Chris, are you here? Hippie Hacker?

D: I am.
D: Alright, I'm going to go through my slide deck in a minute, but I thought before I start the slide deck I'll go ahead and do the demo portion. So we have GitLab CI, cncf.ci, which is a GitLab instance up and running on Packet, and we've imported some of the CNCF projects here. I'm going to go into CoreDNS and specifically make a change on our current release branch, the stable branch, to trigger a quick CI run.
D: I'm going to go ahead and commit that change; we'll come back to this later. This is automatically triggering a change to our CI and this pipeline, and if someone could drop that link into the channel, that would be great. Okay, thank you. So we'll go back to the presentation, to the slides now. So: cross-cloud CI. The CI working group has been tasked with demonstrating best practices for integrating, testing, and deploying projects within the cloud-native ecosystem across multiple cloud providers.
D: So what is cross-cloud CI? For us, we're trying to split this into two portions. One is the cross-project pipeline, which helps to create artifacts per commit for all of the CNCF projects; the other takes those artifacts and deploys them across multiple clouds, and then deploys the app stack on top. Starting out, we've got three projects we're currently integrating: Prometheus, CoreDNS, and Kubernetes.
D: These are the two types of pipelines, shown side by side to split them out. The first one is the cross-project pipeline; each project would have something similar: building the end-to-end tests, compiling the binaries, creating the containers, and then providing a release for that particular commit, so that anyone could reuse these artifacts within their own deployment.
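A minimal sketch of what such a per-project GitLab CI definition could look like; the stage names, job names, and registry variables below are illustrative assumptions, not the actual cross-cloud configuration:

    # Hypothetical per-project .gitlab-ci.yml sketch (names are assumptions).
    stages:
      - build
      - package
      - release

    compile:
      stage: build
      script:
        - make all     # compile the project binaries
        - make e2e     # build the end-to-end test binaries

    containerize:
      stage: package
      script:
        # tag the image with the commit SHA so every commit has its own artifact
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

    release:
      stage: release
      script:
        # write an env file describing this commit's artifacts so other
        # pipelines (e.g. cross-cloud) can consume them
        - echo "IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" > release.env
      artifacts:
        paths:
          - release.env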
D
It
doesn't
have
to
be
our
cross
loud
see,
I
could
be
any
other
type
of
CI,
and
our
second
pipeline
is
our
cross
cloud
which
integrates
all
of
the
artifacts
and
pulls
in
those
artifacts
in
this
in
the
first
stage
and
goes
through
and
deploys
kubernetes
on
those
clouds.
And
then,
on
top
of
that,
deploying
the
artifacts,
followed
by
some
end-to-end
packs
here's
a
list
of
the
projects.
If
you
in
the
slide
itself,
you
can
actually
click
on.
This
will
take
you
to
a
link
where
these
projects
are
on
our
system.
D
This
just
requires
adding
a
CI
mo
file
that
we're
injecting
in
on
our
side
within
the
gait
lab
incent
the
release
when
these
artifacts
are
triggered,
they
release
they
triggered
across
loud
to
repose,
and
the
cross.
Cloud
pipelines
pick
up
the
artifacts
from
all
the
projects
that
we've
integrated
so
far,
including
fresh
artifacts
for
kubernetes
and
using
those
artifacts
to
deploy
across
multiple
clouds
and
then
deploying
the
cross
project
portion
from
all
of
the
different
projects
Prometheus
and
for
DNS
and
then
running
the
end-to-end
test.
There's
a
link
to
this
pipeline
here
at
the
bottom.
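One way a project's injected CI YAML could kick off the downstream cross-cloud pipeline is GitLab's pipeline trigger API; the project ID, token variable, and hostname below are assumptions, shown only to illustrate the hand-off:

    # Hypothetical final job in a project's injected CI YAML: once the release
    # artifacts exist, trigger the cross-cloud pipeline. IDs and token names
    # are assumptions.
    trigger-cross-cloud:
      stage: release
      script:
        - >
          curl --request POST
          --form "token=$CROSS_CLOUD_TRIGGER_TOKEN"
          --form "ref=master"
          https://gitlab.example.com/api/v4/projects/42/trigger/pipeline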
D: We can see that the change I made has probably finished, and if we take a closer look, we can see that it has generated a release and triggered a cross-cloud deployment; I'll drop that link as well. This has gone through and done the deploys for the multiple clouds, and it's in the cross-project app deploys now.
D: The release artifact for each of these commits ends up being a pinning: an env file with a bunch of environment variables that we can use in any CI system, though we'll show how we use it in cross-cloud later. There's a registry built into GitLab that we push to, and we use that later when we pull images down for our cross-cloud deploys. You could do anything with those artifacts; I'll dig into what we do in the next section. Here is our cross-cloud pipeline.
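As a rough illustration of that hand-off, the per-commit pinning might look like a small env file that downstream jobs source; the variable and file names here are assumptions, not the actual cross-cloud artifact format:

    # Hypothetical downstream job consuming a project's release env artifact.
    collect-coredns:
      stage: artifacts
      script:
        # the upstream release job produced something like:
        #   COREDNS_IMAGE=registry.example.com/cncf/coredns
        #   COREDNS_TAG=v010.abcdef12
        - source coredns.env
        - echo "will deploy $COREDNS_IMAGE:$COREDNS_TAG"
      artifacts:
        paths:
          - coredns.env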
D: This is what that collection looks like at this stage. Each of these jobs aggregates those environment files together, and then we do our cross-cloud deploy. I believe these have links on them to take you to the job outputs. If you take a look at the provisioning jobs for AWS, GKE, GCE, and Packet, the result of the deploys is kubeconfigs that are saved as artifacts. In the next stage we collect those artifacts and deploy, using Helm, all the projects we're interested in testing across all of the clouds.
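A minimal sketch of how such a Helm deploy job could consume the kubeconfig artifact from a provisioning job; the job names, chart path, and variables are assumptions:

    # Hypothetical deploy job: use the kubeconfig produced by provisioning,
    # then install a chart with Helm. Names and paths are assumptions.
    deploy-prometheus-aws:
      stage: deploy
      dependencies:
        - provision-aws        # provides aws.kubeconfig as an artifact
      script:
        - export KUBECONFIG=$PWD/aws.kubeconfig
        - >
          helm upgrade --install prometheus ./charts/prometheus
          --set image.tag="$PROMETHEUS_TAG"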
D: This is collecting everything from our cross-project pipelines, for all of the CNCF projects, into a type of dashboard. So this is the end result of the pipeline currently: the CNCF artifacts deployed cross-cloud for each of the projects, and then our end-to-end tests. That link takes us to the dashboard, and it looks like it's still running the end-to-end tests.
D: So, what's next? We're looking at exploring other provisioning methods in addition to ours, and maybe getting some feedback on how we're provisioning, so thoughts on that are welcome; prioritizing some different cloud targets, whether that's Azure or Bluemix; and we're probably not going to add many new projects just yet, so we can get feedback from the different projects we're working with. Any questions, or anything I can dig into a bit further?
D: Sure. The current deployment target is CoreOS Container Linux, but we can swap it out. We tried to go with something minimal that we could target without making too many changes, so our target is really anything that supports cloud-init. We don't SSH out to the boxes after we do the deploy; we just wait for them to come up.
D: Well, earlier I triggered a pipeline for CoreDNS, so I'll quickly look at how that's constructed. These columns are the stages, and they're all done in parallel: everything in build is done in parallel; once that's successful, everything in package runs; once that's successful, everything in release runs. Let me make this bigger so it's readable on people's screens.
D: The last artifact here is the release, and the release adds the registry that we pointed to, so these variables are linked, including pointing to the artifact we just generated. This is the container registry for CoreDNS, and for every commit we have a new container. We're trying to tie that together so that within our last job, once we're done, we trigger cross-cloud; for any change in any of the upstream projects, we trigger our cross-cloud repos.
D: Now, this was the triggering upstream commit and its artifacts, but we need the artifacts from all the other projects as well. If I go into the Kubernetes artifacts job, you can see here the Kubernetes images and the tags. This is the stable release, so it includes 1.6.6 plus the pipeline ID and the job ID, so that we can reference a particular CI artifact.
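To make the pinning scheme concrete, the variables might take a shape like the sketch below; the exact names and tag format are assumptions based on the description of combining the stable version with pipeline and job IDs:

    # Hypothetical pinned Kubernetes artifact variables. GitLab provides
    # CI_PIPELINE_ID and CI_JOB_ID; the naming and tag layout are assumptions.
    variables:
      KUBERNETES_VERSION: "v1.6.6"
      KUBERNETES_IMAGE: "registry.example.com/cncf/kubernetes/hyperkube"
      # e.g. v1.6.6.1234.5678 -> pipeline 1234, job 5678
      KUBERNETES_TAG: "v1.6.6.${CI_PIPELINE_ID}.${CI_JOB_ID}"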
D: If we just click on pipelines, we can see that we also have a master pipeline here that was triggered earlier, and it's pulling in the master artifact. So if we made a change to upstream Kubernetes on master, that would trigger here, and the Kubernetes master artifacts come from upstream master.
D: Once all these artifacts are collected, they're used as part of the deploy. In this column we're deploying to all of these clouds at the same time. Within this job for deploying Kubernetes, the first thing it does is pull down the artifacts from the prior jobs, including all of the environment variables for Kubernetes, and we use those within our provisioning. This is where it calls Terraform and does the deploy with Terraform.
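A rough sketch of what such a provisioning job driving Terraform for one cloud could look like; the module path, variable names, and kubeconfig location are assumptions:

    # Hypothetical provisioning job: pull earlier env artifacts, then run
    # Terraform for one cloud and save the resulting kubeconfig. Paths and
    # variable names are assumptions.
    provision-aws:
      stage: provision
      dependencies:
        - collect-artifacts    # provides kubernetes.env, coredns.env, ...
      script:
        - source kubernetes.env
        - cd terraform/aws
        - terraform init
        - terraform apply -var "kubernetes_tag=$KUBERNETES_TAG"
      artifacts:
        paths:
          - terraform/aws/kubeconfig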
E: Very cool. One quick question about how this works. If I understand it correctly, there is basically a bunch of input environment variables on the far left-hand side of that diagram, which come in from somewhere, and then each one of the, I guess you would call them jobs or processes, each one of those oblong things has an implicit set of input environment variables and output environment variables, and those are just aggregated from left to right.
E: Okay, so they just get passed along without effect. Okay, that's excellent. But is there a contract at each point that understands that something in column three needs the following inputs, whether it's an artifact name with a bunch of environment variables in it or whether it's the actual environment variables?
D: This is our cross-cloud YAML file. These are the branches we're tracking for upstream. If I change this from master to stable, you'll see that we're following a particular set of pinned stable releases, and these will have a different set of pipelines based on our deployments.
D: It breaks out the stages, and how we grab the kubeconfig and all the environment variables. This is how we're sourcing all of the environment files from all the projects, and the first set of jobs does that retrieval. This file here is where the logic for the pipeline occurs, and each of our other projects has a similar file that's much simpler than the cross-project one; sorry, the cross-cloud one.
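As a sketch of the idea, a cross-cloud pipeline file might pin which upstream ref each project follows and lay out the stages described above; the keys, values, and stage names here are assumptions:

    # Hypothetical fragment of a cross-cloud pipeline YAML. All names and
    # pinned refs are illustrative assumptions.
    variables:
      KUBERNETES_BRANCH: "release-1.6"   # the stable pipeline follows pinned releases
      COREDNS_BRANCH: "master"
      PROMETHEUS_BRANCH: "master"

    stages:
      - artifacts    # source env files from each project's latest release job
      - provision    # Terraform per cloud, emitting kubeconfigs
      - deploy       # Helm-install the project charts on each cluster
      - e2e          # run end-to-end tests against each deployment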
F: When we took a shot at this, what we're using at the moment is Kubeform, which has some logic for generating YAML files. So we're writing env options to pass to the kubelet when it starts, and writing the YAML files for the Kubernetes applications, kube-proxy and the API server, to disk; the kubelet reads the directory of YAML files, so we're just dropping manifests.
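For context, a manifest dropped into the kubelet's manifest directory is just a static pod definition; the sketch below is an illustrative example (image tag, flags, and paths are assumptions), not the actual manifests used here:

    # Hypothetical static pod manifest of the kind a kubelet picks up from
    # its manifest directory (e.g. /etc/kubernetes/manifests).
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-proxy
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: kube-proxy
          image: gcr.io/google_containers/hyperkube:v1.6.6
          command:
            - /hyperkube
            - proxy
            - --master=https://127.0.0.1:6443
            - --kubeconfig=/etc/kubernetes/kubeconfig
          securityContext:
            privileged: true
          volumeMounts:
            - name: kubeconfig
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: kubeconfig
          hostPath:
            path: /etc/kubernetes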
B: Yeah, so if you're dropping manifests, then you're doing a form of self-hosting, but that's not necessarily what's done for other things like etcd. What are you leveraging? I'm just trying to figure out: your deployment is a particular configuration type, and the lower portions of your stack, underneath Kubernetes, could be very important. So I want to know what variables there are and what variables people could tweak, like what version of etcd.
F: At the moment, in terms of etcd, we're not pinning it, but we want to switch to containerizing it and being able to pass that in as an environment variable as well, so we'd use this etcd container and this environment for the Kubernetes versions. And then, as people are using Docker, containerd, and rkt, we'd like to be able to switch out those components and show a matrix of: we're using this Kubernetes version on this version of Docker, and it failed.
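The component matrix being described could be expressed as a set of environment variables that the deploy swaps out; everything below is an assumed illustration of that idea, not their actual configuration:

    # Hypothetical component-version pinning for a Kubernetes-version x
    # runtime x etcd matrix. All names and versions are assumptions.
    variables:
      KUBERNETES_VERSION: "v1.6.6"
      ETCD_IMAGE: "quay.io/coreos/etcd"
      ETCD_VERSION: "v3.1.10"
      CONTAINER_RUNTIME: "docker"    # or: containerd, rkt
      DOCKER_VERSION: "1.12.6"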
B: With that in mind, because I have also worked on SIG Cluster Lifecycle: our stated goal is to create a working group, and the working group is to spread kubeadm across all the installers. That way we have one commonality for how we do deployments, and that will provide you the capability to specify the manifests or the version of etcd that we desire.
C: Yeah, that looks very cool. Is there any concept of limiting the amount of work, like if I want a certain stage to be throttled at a certain rate? Does that concept exist at the moment?
D: There's nothing in place currently that I can think of as a quick mechanism, but I suspect it wouldn't be terribly hard. A lot of that is around how we do the runners and how jobs are scheduled. You can actually have per-job tags that request particular runners, and if there are no more runners available, the job will wait.
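A minimal sketch of the per-job tag mechanism being described; the tag and job names are assumptions:

    # Hypothetical job pinned to specific runners via tags: it only runs on
    # runners registered with a matching tag and queues when none are free.
    e2e-packet:
      stage: e2e
      tags:
        - packet      # only schedule on the runners registered for Packet
      script:
        - ./run-e2e.sh --provider packet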
D: But there are also Kubernetes runners that allow us to target a Kubernetes cluster, and in some of our earlier iterations we were creating the cluster on the fly and then using that cluster for the CI job itself, as the set of runners for that deployment. One of the things we found that was nice was having a decent set of caching, particularly for some of the more complex builds, so there are lots of options as far as the runners go. We've experimented with the Kubernetes runner and with running directly on the Packet hosts.
H: Hi, this is Chris Hoge from the OpenStack Foundation. I actually just managed to secure a bunch of hardware that I can use for testing, and if possible I'd like to help you out on adding OpenStack as one of your stable cloud providers for testing. If we could connect, maybe after the meeting or later on this week, I think that would be fantastic.
D
That
sounds
great
one
of
the
things
that
we
had
a
blog
on
that
you
might
want
to
take
a
look
at
regards
to
that
is
our
co-op.
It's
not
coop
coop,
this
last
blog
post
back
from
January
actually
has
this
deploying
using
an
API
endpoint,
something
similar
to
so
what
Rob
is
doing
with
crowbar.
Although
this
deployment
was
using.
H
The
meetings
yeah
yeah
I,
mean
I,
think
yeah
I
think
that
one
of
the
barriers
for
us
was
being
able
to
find
a
you
know,
kind
of
a
stable.
You
know
stable
OpenStack
cloud
release
that
had
sufficient
resources
for
it
and
I
think
that
over
the
last
few
days
is
something
that
I
was
I've
kind
of
able
to
been
able
to
get
my
hands
on
and
I.
You
know
and
I
think
that
you
know
something
we're
definitely
interested
in.
You
know
we'd
like
to
contribute
that
hardware.