From YouTube: Kubernetes SIG Testing - 2020-05-05
A: But if anybody has anything they want to shout out, do so. Hippie wants to talk about deploying prow.cncf.io, and this week is the magical week where SIG Testing has been selected to give an update to the Kubernetes community on Thursday, so I would love any input anybody has on what to do or what to say. With that, Marshall, why don't you kick us off?
B: Let me just... yeah. So there's a background to this: recently cgroup v2 support was added in the kubelet, and now the runtimes, like CRI-O or containerd, are getting ready to add test cases that will test cgroup v2 functionality. Even at the runtime level, the runc patches were merged in, and there's another runtime, crun, which supports cgroup v2 functionality out of the box. Looking at that, I'll be working (I predominantly work on CRI-O) with a maintainer of containerd, Mike Brown, to essentially make sure that the kubelet node tests also test cgroup v2 support. Out of the box, if we have to choose an operating system for our tests, it's Fedora 31 and above, because it comes with cgroup v2 functionality enabled by default; it's one of the most popular distros anyway. So I submitted a PR. Although it's a work in progress, I have given the link for it in the agenda doc.
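(For context: node e2e jobs pick their test VM image through an image-config file. A minimal sketch of what a Fedora cgroup v2 config might look like follows; the image and project names here are hypothetical, not the contents of the PR being discussed.)

```yaml
# Hypothetical node e2e image-config; image and project names are invented.
images:
  fedora31:
    image: fedora-31-cgroupv2-node   # hypothetical image with cgroup v2 enabled by default
    project: my-staging-project      # assumption: GCP project hosting the image
    machine: n1-standard-2           # VM size for the node e2e test host
```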
B: Do you have any feedback?

A: I have two questions off the top of my head. I see today there's already a CRI-O e2e Fedora job in TestGrid. I don't actually know what YAML that links to, but how is what you are proposing different from the existing job today?
B: The biggest problem we have right now is how to get a node test going on a Fedora-based image. For this particular test case, I need to look into whether it uses Fedora at all, or, if it is Fedora, whether it is cgroup v2 or not; but either way, it's still not a presubmit test case.
A: As far as which project, my answer is kind of like: all of them. I think it became pretty clear a week or two ago that the current SIG Node tests being pinned to a single project doesn't seem to be really scalable. If one of them ends up blowing up, causing the project to run out of resources or causing the project to have its network access revoked, then every single other node test that uses that project blows up as well.
A: What we prefer to do for e2e tests is to have a pool of GCP projects that can be used to, you know, provision resources like VMs running Fedora or COS or Ubuntu for these e2e tests, ideally with as little customization to those projects as possible.
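(The pool being described is declared in a Boskos resources config; a minimal sketch, with hypothetical project names, might look like this:)

```yaml
# Sketch of a Boskos resource pool; the project names are hypothetical.
resources:
- type: gce-project          # resource type an e2e job requests a lease on
  state: free                # initial state when first added to the pool
  names:
  - k8s-infra-e2e-project-001
  - k8s-infra-e2e-project-002
```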
A: So is there any reason you couldn't just make the image publicly available?
B: The one I'm using is a development project; it's not really... it's partly used for a production setup, so this might... That was going to be my next question: when you say "all of them", which project do you say I should use? And I need to have somebody either create that Fedora image for me, or I can create it myself if I have the required access.
A: Yeah, I unfortunately don't know offhand what the process is for putting an image into GCP and then making it accessible to multiple other projects, but that's what I'm asking for. I'm trying to find a link to the file so I can show you the list of projects that we currently have in the pool. The reason I'm saying "all of them" is because I'm also in the midst of creating a bunch of GCP projects over here, and I will be creating more; I will need to grow them.
B: That kind of makes sense; I just don't know whether I have access to any stable project that I can use. You know, I don't want to end up making something public from a development project, which ends up resetting something and breaking the test cases tomorrow. I would rather go with the projects that are dedicated to SIG Testing, or to testing in general, and then have the image go straight there. So my question would be: whom do I come to? Whom should I contact so that they can push the image on my behalf? I don't even need access; I can give them an image tar file and they can just upload it for us. That also works fine for me, but I need to know whom to approach for those things: the project names, and the people to contact.
A: I'd like to maybe create a project for this over in WG K8s Infra, but I don't have the bandwidth for that right now. I would kind of want SIG Node to own this as a SIG, versus just you doing it as a person, if that makes any sense, because I'd like to make sure SIG Node is on board with supporting this. They're currently going through an audit of, like, all of their node e2e jobs, and they also potentially want the ability to test other operating systems or other images.
A: To reiterate: yes, it does. The point I'm trying to make by giving you that list of projects is that, for example (not quite, but like), any one of the projects listed in the GCE project pool could be used by an end-to-end test. So you need to find some way for your image to be accessible by all the projects listed in that file, not just pick one, because it could be any of them.
A: Okay, and thank you very much, Olaf, for taking notes; you are amazing. So the next thing I wanted to discuss briefly was work I have been doing as part of the K8s Infra working group. Oh, it's been Hippie? Okay, I thought I saw you taking some notes as well.
A: So this is a PR that I made to describe how to stand up build clusters over in k8s-infra's... sorry, the CNCF-owned GCP organization for k8s-infra. What this does is... this is probably too small, let me pull this up. It uses Terraform to provision the actual build cluster, and then I kind of have to do some manual stuff to manage the Kubernetes YAML resources.
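(A sketch of the kind of hand-applied Kubernetes YAML involved; the namespace, binding, and group names below are hypothetical, not the actual PR contents:)

```yaml
# Hypothetical resources applied on top of the Terraform-provisioned cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: test-pods                      # assumption: namespace where Prow jobs run
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: build-cluster-admins           # hypothetical binding for community maintainers
  namespace: test-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s-infra-cluster-admins@kubernetes.io   # hypothetical Google Group
```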
A: There we go. I stood up two clusters. One was a trusted cluster, intended to run jobs that need access to sensitive secrets or the ability to push to sensitive places. So, for example, the trusted cluster would be used to run jobs that can run GCB (Google Cloud Build) things to build images and push them to staging buckets; or you could see it being used to run Peribolos, or something else that requires a GitHub OAuth token that is capable of doing things to the kubernetes GitHub organization. Stuff like that.
A: I had some open questions for the group. One of them has to do with some of the amazing dashboards that are available at monitoring.prow.k8s.io, which are going to take some time to load. The Boskos resource usage dashboard I find particularly useful, since we were just talking about projects for node e2e to provision; right, like, there are pools of projects here. Green is good: it means the project is free.
A: Red is not actually bad, but red means that project has been leased out, taken by an end-to-end job for it to go stand up resources and stuff. Blue means the Boskos janitor is in the midst of cleaning that project up before putting it back into the pool for lease, and so on and so forth. So this just gives me a quick glimpse into... you know, we have plenty of headroom and capacity to throw more end-to-end jobs at this and use more clusters. That's awesome!
A: What I'm more concerned about is whether adding a new Boskos instance will throw off the metrics in this dashboard. This is the Boskos server dashboard, which shows just general HTTP latency and response codes for different calls to the Boskos API. It has been helpful in troubleshooting some resource contention and lock issues within Boskos over the past couple of months; there have been a number of PRs which have greatly improved its performance. But you'll notice this dashboard currently doesn't differentiate between which Boskos instance all these metrics are coming from.
A: So there's this thing here which allows this dashboard to scrape from the google.com-hosted build cluster and Boskos instance, and if I were to just add an entry for another Boskos instance, it's not clear to me that Prometheus automatically tags the metrics with the host that it's scraping these things from. Does anybody have any insight or suggestions on what to do here?
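(One possible answer, sketched: Prometheus attaches an instance label of host:port to every scraped target by default, and a static_configs entry can also carry explicit labels, so two Boskos instances can be told apart like this; the target addresses below are hypothetical:)

```yaml
# Sketch: distinguishing metrics from two Boskos instances in prometheus.yml.
scrape_configs:
- job_name: boskos
  static_configs:
  - targets: ['boskos.google-build.example:9090']   # hypothetical address
    labels:
      boskos_instance: google-build                 # explicit per-target label
  - targets: ['boskos.cncf-build.example:9090']     # hypothetical second instance
    labels:
      boskos_instance: cncf-build
```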
A
So
that's
one
thing:
the
let's
see
here,
I
think
the
only
other
thing
I
wanted
to
just
mention
for
the
group
is
part
of
the
point.
I'll
stop
by
share
here.
Part
of
the
point
of
running
this
infrastructure
is
to
be
able
to
allow
community
members
to
support
it
and
access
it
and
the
permission
structure
that
I
have
more
or
less
copy-pasted
is
probably
not
quite
set
up
for
that.
G: Part of it is, as we hand stuff off to the larger community outside of Google for managing the Kubernetes infrastructure, I see that as a neat opportunity to help replicate the way that our community functions so well, with community-managed infrastructure. A good place for that is within the CNCF, because they have a multitude of other projects that could also benefit greatly from having interactions with Prow and the ecosystem that test-infra provides.
G: I've created a cncf-infra org, and I don't think there's anything in there yet. But the idea is to help support, initially, the CNCF conformance program, specifically the cncf/k8s-conformance repo, so that we have automation for vendors wanting to submit their test results and to ensure that their clouds can run all of the conformance tests in a way that's verifiable.
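(For reference, a conformance submission to cncf/k8s-conformance includes a PRODUCT.yaml alongside the e2e.log and junit results; a sketch with an invented vendor:)

```yaml
# Hypothetical PRODUCT.yaml from a vendor submission.
vendor: Example Cloud
name: Example Kubernetes Engine
version: v1.18.2
website_url: https://kubernetes.example.com
documentation_url: https://kubernetes.example.com/docs
type: hosted                  # e.g. distribution, hosted, or installer
description: A hypothetical conformant Kubernetes offering.
```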
G: We've had some things slip through the cracks where we were basing it on the count of tests and not verifying them. So rather than having a human go through and say "this looks okay", we're automating that, starting to add some bot stuff there. We have Google, who has graciously donated quite a bit of infrastructure for the CNCF, but specifically for the Kubernetes community.
G: We have other cloud supporters as well, but I thought it might be interesting if we could look at creating infrastructure that is supportable by the community. My great goal would be that it's all self-hosting on Kubernetes, but for now let's do the small pieces that we can. So we're modeling it similarly off of what Aaron's working on with the initial WG K8s Infra work on the Terraform approach. So I'm looking to create...
G: ...the permissions model and the creation of the node pools and, well, in this case we're probably going to use some Amazon resources, so their versions of these things, to the point of bringing up a Prow instance for prow.cncf.io. I know that in this process we'll hit a lot of Google-ification, and so...
G: What I'm kind of looking for is some direction on where those gotchas will come up, and also anybody that's interested in seeing it happen. I really feel that test-infra and the tooling that we have put together, which supports the Kubernetes community and the way that it works, is something that can have a great impact on how many organizations work in the world, particularly if we can find a way where it's easy to configure and spin up in a modular way.
G: So that's the call: let's take the tooling that we've so wonderfully created for the Kubernetes community and, as an initial trial, get it up and running for the CNCF; for example, allowing people to sponsor a particular project, and then having Boskos able to spin up clusters there, and then being able to assign Prow jobs to those clusters.
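(Assigning Prow jobs to a sponsored cluster is what the cluster field in a job config does; a sketch with invented names:)

```yaml
# Sketch: a periodic pinned to a donated build cluster via the `cluster` field.
periodics:
- name: cncf-example-periodic          # hypothetical job name
  interval: 24h
  cluster: cncf-sponsored-build        # assumption: alias registered in Prow's build-cluster kubeconfig
  decorate: true
  spec:
    containers:
    - image: alpine:3.11
      command: ["echo", "running on the sponsored cluster"]
```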
E: We've been working on a Prow installer. It's not really documented right now, but it helps us set up Prow instances fairly easily on new GCP projects. For example, it takes care of creating buckets, service accounts, and everything. So that might be something you want to look at when you are looking at Google-specific things, because that's all in there, in Go code, basically; that might make it easy if you're looking to abstract this to do some things.
D: Just on the Google-specific connections: right now Prow is sort of tightly coupled to GCP, mostly due to the fact that it only supports GCS as object storage for test results and pretty much everything else. But right now someone is working on adding S3 support all the way through; it's not done yet.
A: Yeah, there's that, and I think we also take advantage of the ability to bind a given pod to a given service account when it comes to allowing Prow to talk to privileged services. I don't know if there's an equivalent sort of thing for other cluster deployments, but we are trying, where possible, to get out of the business of storing secrets inside of the cluster, lest they get leaked.
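(The binding described here is GKE workload identity: a Kubernetes service account annotated with the GCP service account it may act as, so no key file is stored in the cluster. A sketch, with hypothetical names:)

```yaml
# Sketch: pod identity via GKE workload identity instead of a mounted secret.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: boskos-janitor                   # hypothetical in-cluster service account
  namespace: test-pods
  annotations:
    # GCP service account this Kubernetes SA may impersonate (hypothetical).
    iam.gke.io/gcp-service-account: boskos-janitor@my-project.iam.gserviceaccount.com
```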
E: We can do that. It was mostly motivated by not wanting to use Bazel, and also, I mean, by the documentation itself. We had an "install Prow for your own cluster" sort of documentation that said "add these five service accounts", and then in the end the installer failed because it needed one more. So we tried to automate this, and by putting it into Go code we felt we could test it more easily.
D: To the specific suggestion: right now I'm working for Red Hat, so I'm working on their instance, but I worked for another company before, for which I set up a Prow instance, and that was basically: there's a starter YAML file in the docs, and I followed that and read quite a bit of code at the time. The getting-started part takes time, because the docs are not up to date and no one is really maintaining them, but once you get to the point where the Prow instance is up and running, it's not so much maintenance.
E: Somewhere in the Prow repository there is an ANNOUNCEMENTS.md file where basically all the breaking changes for Prow are maintained, and that's the file, at least, where we see: "hey, there's a change from March 25th". That's our new Prow baseline, and we then check the deployments on the upstream Prow to see if it got reverted at one point or if we have to...
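(Concretely, upgrading a Prow deployment usually means bumping a pinned, dated image tag after checking ANNOUNCEMENTS.md; a sketch, with the tag invented:)

```yaml
# Sketch: a Prow component pinned to a dated upstream image tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deck
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deck
  template:
    metadata:
      labels:
        app: deck
    spec:
      containers:
      - name: deck
        image: gcr.io/k8s-prow/deck:v20200325-abc1234   # hypothetical pinned tag
```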
A: There is one proposal that's been posted to the sig-testing mailing list, for what it's worth, to have this concept of a staging Prow instance to which changes are deployed automatically; some end-to-end tests would then be run against it, and if those e2e tests pass, a PR is opened to bump the actual production Prow deployment, if you can call how we treat it "production". So that's one thought. Another proposal, which is, I think, being drafted...
A: I'm not sure where it is... is the idea of having a staging repo and a production repo. So instead of you having to, like, scrape an image tag out of the manifests inside of the test-infra repo, you could just get, like, the latest tag inside of the production images repo, so that all of the, like, development...
A: You know, the multiple image tags all get pushed to the staging repo, but the image tag that's actually happy and deployed gets pushed over to the final repo. So those things could help alleviate some of that. I would agree with Alvaro that sometimes the set of changes that fold into a given Prow instance don't necessarily land in ANNOUNCEMENTS.md, though people try to put breaking changes in there when possible. I feel like the overall velocity of code...
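(The staging-to-production tag flow sketched above resembles the container image promoter's manifests, where a digest in a staging registry is explicitly promoted to production; all names, the digest, and the tag below are invented:)

```yaml
# Sketch of an image promoter manifest; registries, digest, and tag are hypothetical.
registries:
- name: gcr.io/example-staging        # where development tags pile up
  src: true
- name: gcr.io/example-prod           # only explicitly promoted images land here
  service-account: promoter@example-prod.iam.gserviceaccount.com
images:
- name: deck
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000":
    - v20200325-abc1234
```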
A: Yeah, I think we're kind of trending now into the meeting topics I wanted to talk about to the community. So the staging Prow proposal was one; whatever happens with Boskos would be another. I think that was a proposal and document that was brought up a couple of meetings ago and then also sent out to the sig-testing mailing list, which describes the intent; I'm not necessarily sure that a timeline has really been nailed down for that.
A: And I literally just did these slides, like, 20 minutes before this meeting, so there's not much on them. The standard format for updating the community is talking about stuff we did. If anybody has any suggestions, toss them in the SIG Testing Slack, or you are able to comment on these slides as well; they're linked in the agenda, and I'll send out a link to the sig-testing mailing list and post it in Slack. So, some stuff off the top of my head...
A: Stuff I know we did since our last update in October 2019: we stopped retrying flakes in e2e tests, which was really painful for a little while but allowed us to find things like a race condition upstream in runc. We updated Triage (I love this thing) to be able to, like, restrict down to specific jobs.
A: I don't know what our plans for 2020 are; I could use some help there. I don't know how they're going to affect other people in the community. I do want people to kind of revisit the jobs... I think I'm going to ask SIGs to revisit the jobs that they own. Like I said, SIG Node recently discovered they were kind of unable to effectively troubleshoot or debug the end-to-end tests that they own. Let's see if their dashboard looks any better.
A: If I can find the word "node"... So we had an incident where, like, the release-blocking tests for node were continuously failing, and all patch releases were kind of blocked on solving this, so I helped them sort of unblock the specific release-branch tests. You can see they have, like, all these other jobs that are continuously failing, and they're taking sort of an assessment, figuring out, like: okay, these tests in this job are continuously failing. Do we know why they're failing? Do we care enough to support this?
A: I'm supposed to give updates on each subproject that falls under SIG Testing, so I'm just going to talk about the "making Boskos its own project" thing. I was going to talk about the fact that kind released version 0.8, and it supports restarting kind clusters after a reboot, which has been a very much asked-for feature; I'm sure Ben can fill me in on many other things.
A: For Prow, I talked about the Prow staging instance thing. I know we have definitely leaned into using workload identity to try to improve our security posture, and there was another proposal sent out to the mailing list about having, like, one Prow instance potentially serve many different projects or repos or whatever, but then have different Deck instances, so that if... like, right now (not that I want this to be the state of prow.k8s.io), but right now...
A: The Testing Commons subproject has really been pushing a proposal in the past couple of weeks about refactoring the e2e framework package inside of the kubernetes repo and breaking enough dependencies that it could be moved to a staging repo, so that it could be consumed as an artifact without having to import or vendor in the kubernetes repo, which is a bad idea, and you should feel bad for doing it, but some people had to do it in the name of expediency. To help support that...
A: ...there's the import-boss job. import-boss is a tool that enforces that a given package is only allowed to import these other packages, or is not allowed to import those other packages. It now supports YAML as a format; it used to only support JSON, so we had no idea why rules were being added where. You can see now that we can, like, comment on why these rules are here; oops, sorry; like, the packages in the pkg directory shouldn't import things from commands, and vice versa.
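(A sketch of what such a rules file looks like in the YAML format, with the comments that were impossible in JSON; the paths are illustrative:)

```yaml
# .import-restrictions sketch; the inline comments are the point of YAML support.
rules:
- selectorRegexp: k8s[.]io/kubernetes
  allowedPrefixes:
  - k8s.io/kubernetes/pkg        # library code is fine to depend on
  forbiddenPrefixes:
  - k8s.io/kubernetes/cmd        # pkg/ must not reach into command code, and vice versa
```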
A: So, because we can comment on why these dependencies and these rules are here, that should help us untangle and break apart the e2e framework things. It also applies to all test files, not just regular Go packages. I don't know if there are any related KEPs that we should care about; I'm open to feedback there. And then I need to give updates on any related working groups; WG K8s Infra is the only one I know of today.
A: K8s Infra runs all these fine services: self-service DNS; self-service Google Group reconciliation and Google Group management; the go.k8s.io redirector; perf-dash.k8s.io, which was moved over recently; gcsweb, the thing that lets you view artifacts in GCS like a plain old website, now runs over there; and I think all the Slack infra stuff now runs over there too.