From YouTube: Kubernetes SIG Testing - 2019-07-30
A: Per the Kubernetes code of conduct, and since we're on YouTube later, you're all being your very best selves and not being jerks. Today on the agenda, Cole's going to show us this new workflow he's been working on to, sort of, test Prow jobs locally using, among other things, kind. I also wanted to give an update on the Doodle I sent out about maybe a different meeting time or cadence.
D: Okay, yeah, so this is the file. Super simple. Imagine that this file, or of course one similar to it, can be copied around to different repos that have Prow config in them, and the only things that will need to change are these two lines that specify the config path and the job config path. Potentially there are also some extra variables that can be provided, which have reasonable defaults. Other repos, instead of just calling this script directly, would have a little wrapper script that then calls the actual script here.
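A minimal sketch of such a per-repo wrapper, assuming made-up paths and variable names (the real test-infra script and repo layout may differ):

```shell
#!/usr/bin/env bash
# Hypothetical per-repo wrapper; everything here is illustrative.
# Only these two lines should need to change when this file is
# copied into another repo that has Prow config:
CONFIG_PATH="${CONFIG_PATH:-$PWD/config/prow/config.yaml}"
JOB_CONFIG_PATH="${JOB_CONFIG_PATH:-$PWD/config/jobs}"
export CONFIG_PATH JOB_CONFIG_PATH

# Delegate to the shared script that does the actual work.
SHARED="${SHARED:-../test-infra/prow/pj-on-kind.sh}"
if [ -x "$SHARED" ]; then
  "$SHARED" "$@"
else
  echo "shared script not found at $SHARED (sketch only)"
fi
```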
D: So it starts off by loading some arguments, and then it "ensures an install", which means that it checks out a couple of different tools. In particular, it gets our mkpj tool, which creates a ProwJob based on config, so it creates the actual, like, ProwJob YAML resource; and then it gets the mkpod tool, which creates a Pod resource from a ProwJob resource, which includes all the decoration steps. It also gets kind and sets up a kind cluster called mkpod, and it mounts the node in that kind cluster to an output directory. Then, from there, we create a ProwJob and use mkpod after that to create a pod from the ProwJob. And the one thing of interest here is that we use this local flag; local is a new feature in the pod utilities that essentially runs the GCS upload functionality in a, like, dry-run mode.
D: So what this looks like is, when you run this command, you either run the one that I just showed, with a bunch of arguments, or you run the wrapped version here, and you'd run something like pj-on-kind.sh pull-test-infra-yamllint. That would create the ProwJob, create a pod from the ProwJob, and apply that to the kind cluster. And, assuming these tools are already installed, which they will be after the first time you run it, it goes really quickly, and tools like mkpj include interactivity.
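A rough sketch of the underlying steps (mkpj creates the ProwJob, mkpod creates the decorated pod, then it's applied to the kind cluster); the flag names and job name here are assumptions and may not match the real tools exactly, and the block is guarded so it is a no-op when the tools are not installed:

```shell
# Illustrative only: flag names are assumptions, not verified CLI syntax.
if command -v mkpj >/dev/null 2>&1; then
  # 1. Create a ProwJob resource from the job config.
  mkpj --config-path="$CONFIG_PATH" --job-config-path="$JOB_CONFIG_PATH" \
       --job=pull-test-infra-yamllint > prowjob.yaml
  # 2. Create a decorated Pod from the ProwJob; --local dry-runs GCS upload.
  mkpod --prow-job=prowjob.yaml --local --out-dir="$PWD/output" > pod.yaml
  # 3. Apply the pod to the local kind cluster named "mkpod".
  kubectl --context kind-mkpod apply -f pod.yaml
else
  echo "mkpj not installed; the commands above are illustrative only"
fi
TOOLS_CHECKED=yes
```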
D: So, for example, if you create a presubmit job, it'll interactively ask you which pull request you want to point it at; similarly, for postsubmit jobs, it'll ask you which SHA you want to use. Yeah, and there's one additional feature, which is that sometimes, when you run these Prow jobs, they're going to have volumes that won't exist in the kind cluster.
D: For example, if there are some service account volumes that are mounted in, or some secrets that access other clusters, those may only exist in the production cluster. So when you're running this tool, if there are any volumes that are not emptyDir or hostPath, it'll interactively prompt you to replace them, and you can either replace them with an emptyDir, or specify a hostPath on your local machine, or you can continue to use the existing value; and if you do that, you'd just be expected to load it yourself.
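A sketch of what that substitution can look like in the generated pod spec; the volume name is made up, and the two alternatives are the standard Kubernetes emptyDir and hostPath volume types:

```shell
# Write an illustrative pod-volume snippet showing the two substitutions
# the interactive prompt offers for a production-only secret volume.
cat > volume-substitution.yaml <<'EOF'
volumes:
- name: service-account      # made-up name for illustration
  # Originally something like:
  #   secret:
  #     secretName: service-account   # only exists in the production cluster
  # Option 1: replace it with an empty dir:
  emptyDir: {}
  # Option 2: replace it with a host path on your local machine:
  # hostPath:
  #   path: /home/me/service-account
  #   type: Directory
EOF
```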
A: Yeah, I know it's not a demo, but, like, I personally, having just gone through and tried to rewrite the Prow docs so that they're a cookbook, like: okay, here's how you write a job; now here's how you test a job. Oh wait, before you can test the job, you actually need to stand up an entire Prow cluster over here, and then read one or two other docs to make sure it's all lasting, and try to figure out what's going on here. Because, one advantage: we have a tool for this already that's called phaino.

D: You know, the reason that I went this route is that it uses the mkpj and mkpod utilities, so that it's essentially using the same code that we use to generate things in production, or at least very close to it.
D: So that should make things more consistent, and, more importantly, it lets us use the pod utilities properly. So the pod utilities are, like, actually running and decorated around your job, so you get all the same environment variables provided and the same artifact handling provided.
D: I think phaino is, like... it's certainly cool and a great start, but the main issue is that it doesn't have proper pod utility support, and we expect everybody to be using that for their jobs. In particular, like, the main difference for people that really care isn't necessarily local testing, but checking out the source code consistently and getting all the extra refs and repos checked out. That was not really emulated, or at least not perfectly.
D: Well, we can certainly convert this into something better. I had it this way because it's mainly calling out to a lot of other applications right now, and I figured that other people might also want to, like, add kubectl commands to deploy the secrets their jobs might need, or something like that. But we can certainly make these extra little scripts better. Sorry.
A: Yeah, no, I think this was cool, to, like, prove that this is in fact possible. Yet another place where we're putting kind at the center of all things, which is great. Yeah, and looking forward to developing faster with this. So next I just wanted to, real quick, reveal this live: I don't actually know what the results of the Doodle poll were, so you'll watch me unwrap the envelope. So it looks like there are a bunch of different choices that all have 12 checkmarks. But if I go based on the number of actual checkmarks, it looks like Tuesday 10:00 to 11:00 Pacific, bi-weekly, is sort of where we want to live. Yay: an earlier time on Tuesdays, a longer meeting but less frequently, and my hope is this is a little friendlier to those of us who want to join from Europe or time zones that are not on the west coast. So I will send out invites after this. The question I have for the group is: do you all want to meet next week?
B: I didn't put a link, so it's not actually an issue; it's a pull request. It's one I did, like, two weeks ago, in which I added functionality in sinker to automatically delete pods for presubmit ProwJobs that were created on a pull request that has a newer revision. In that case the ProwJob gets put into the aborted state, and, before, nothing happened. My change cleaned up the pods, because no one cares about the result. A couple of days later, a colleague of mine noticed that there's actually a configuration option for this already present, which defaults to false, which is why it didn't happen before. And then I posted this, and Ben mentioned that there are some jobs currently, for instance the scalability jobs, that rely on pods not getting terminated. And the interesting thing, or one of the interesting learnings of the whole thing, is that even if we had had tests for the whole thing and the current behavior, they would have continued to pass, because of this configuration option. And the change I did was... I think the status quo is that we basically have the business logic for pod cleanup spread across the various agents.
D: Yeah, I think there's a pretty clear boundary here, or at least I think it's very clear what it should be. I think sinker should only handle ProwJobs, because that's the only resource that isn't managed by some other execution controller. Plank should manage pods, their entire lifecycle, including their garbage collection. The build controller handles builds and all of their garbage collection. The only thing that isn't covered by each controller handling its own garbage collection is the ProwJobs themselves getting cleaned up.
F: But then you, furthermore, have, like, the entrypoint timeout and grace period, so there's just absolutely no reason that, if you have a long-running termination process or a cleanup that takes forever, you shouldn't be able to tell the system that that is the case, so that we don't accidentally kill you. So your change, supporting it like that, is...
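For reference, the entrypoint timeout and grace period mentioned here are settable per job through the decoration config; this is a sketch with a made-up job name, image, and values, not a real job definition:

```shell
# Illustrative Prow job config snippet: decoration_config lets a job with a
# slow teardown tell the system how long cleanup may take. Values made up.
cat > long-teardown-job.yaml <<'EOF'
periodics:
- name: made-up-scalability-job
  decorate: true
  decoration_config:
    timeout: 4h        # overall time before entrypoint interrupts the test
    grace_period: 30m  # time between the interrupt and a hard kill
  spec:
    containers:
    - image: example.invalid/test-image
      command: ["./run-and-teardown.sh"]
EOF
```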
B: I actually think that's a new question, because the one I initially asked was whether we have objections to basically moving everything related to ProwJobs' pods out of sinker into plank, and it seems that is not the case. And the second question, that you now started, I think is: should we have this behavior of cleaning up pods for ProwJobs that have a newer version basically by default, or always?
B: I really agree; that's why I made the change in the first place. And, I mean, maybe... I don't know how easy these scalability jobs are to fix; I have no knowledge at all about these things. But if we need this for some jobs, then this should absolutely be opt-in and not a default, at the very least. And currently the default is to always keep pods, even if you don't care at all about the result, and that just doesn't make sense. So, you're saying we should opt into keeping them? Yeah.
F: For the scalability jobs, I think there are two stumbling blocks. The first one: I'm not sure who the subject matter expert is there, Michelle, and we should ask whether those jobs are currently handling, like, SIGTERM correctly. That's the first step, and then the second step would just be, like, literally turning up the grace period to do this.
F: The statement may not be that you want to keep the pods around if the ProwJob has been deleted, but more that that's the current behavior, and so keeping it makes this easier to transition to. Isn't that it? And also, it's not a deletion of a ProwJob, but rather a transition from any state into aborted.
F: The reason this impacts scalability jobs is, I believe, that scalability jobs today are not configured correctly to expose to the kubelet the fact that their teardown takes a long time. So if those jobs get aborted and then we delete the pod, the kubelet defaults to, like, a couple of seconds of grace period before it sends the kill; the scalability jobs haven't had time to deprovision everything, and they end up leaking resources. So it's just a matter of, like...
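On the kubelet side, the relevant knob is the pod's terminationGracePeriodSeconds, the wait between SIGTERM and SIGKILL (30 seconds by default); a sketch of raising it for a slow-teardown pod, with made-up names and values:

```shell
# Illustrative pod spec: terminationGracePeriodSeconds tells the kubelet
# how long to wait after SIGTERM before sending SIGKILL. Values made up.
cat > slow-teardown-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: made-up-scalability-pod
spec:
  # Default is 30s, far too short for a long cluster deprovision.
  terminationGracePeriodSeconds: 1800
  containers:
  - name: test
    image: example.invalid/test-image
EOF
```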
H: Oh, hi, everyone. So I went to, like, KubeCon China last year and did the new contributor workshop, but I didn't get much time to, like, start contributing. So I thought, like, I'd play with Prow, and I thought this would be a good starting place. And right now there's, like, [a meetup of] more than 150 here in Delhi, India, so the timing was really good, like that.
A: Yeah, that's sort of, it's kind of what I figured; your timing thus far has not been bad. I sent a link in chat to our contributing doc, which I just recently rewrote, and there are some links in there to all the issues we have that are labeled with "good first issue" and "help wanted", if you want some places to get started. And then, if you have any other questions, or those don't look interesting to you...