From YouTube: Kubernetes SIG Testing - 2020-01-28
A: Hi everybody, welcome to the SIG Testing meeting — the Kubernetes SIG Testing meeting for Tuesday, January 28th. I am your host, Aaron Crickenberger. We are all being publicly recorded, so we should all adhere to our best code of conduct, as laid out by the community, which basically boils down to: don't be a jerk. On the agenda for today's meeting, Katherine's gonna talk to us a little bit about the work she has done in improving ProwJob visibility, and then Antonio.
C: But basically the idea is that prow jobs can often fail in ways that are not obvious to the people who are trying to run their jobs — like the image doesn't exist, or they have misconfigured their volumes, or something along those lines, or, you know, the node just decides it doesn't like things being scheduled to it today.
C: So the idea is that, by reporting more information more consistently about prow jobs, we can indicate to users in the job view UI that something has gone wrong. So as part of this, I have added reporters that ensure the started and finished files always get reported. So you always know if your job started, and you always know if your job finished. We upload those for every job to GCS alongside the artifacts, so you know what state a job was in, and we can upload the pod info and events so that you can reconstruct what happened as well.
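The started/finished reporting described here can be sketched as a small classifier: which marker files exist in GCS, and what the finished marker says. This is a rough sketch of the convention, not the authoritative Prow schema — treat the field names as illustrative.

```python
import json

def job_state(started_json, finished_json):
    """Classify a job from the presence/content of its uploaded marker files.

    started_json / finished_json are the raw file contents, or None if the
    file was never uploaded. The "passed" field follows the finished.json
    convention, but treat the exact schema as an assumption here.
    """
    if started_json is None:
        return "never started"
    if finished_json is None:
        return "started but not finished (still running, or died uncleanly)"
    finished = json.loads(finished_json)
    return "succeeded" if finished.get("passed") else "failed"
```

With both files always reported, the UI can distinguish "never scheduled" from "died mid-run" instead of showing a generic failure.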
C: All of that is in now and can be turned on with a bunch of new config flags, and is enabled on the Kubernetes Prow. Soon I will have a view of the pod that is similar to, effectively, kubectl describe, and also the status thing at the top of Spyglass. The job viewer will start showing helpful hints along the lines of: "It looks like your job couldn't pull this image — check what image you wanted to pull and that you have access," or something along those lines.
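The hint mechanism described above amounts to mapping observed failure reasons to user-facing advice. A minimal sketch: the reason strings `ErrImagePull`, `ImagePullBackOff`, and `FailedMount` are real kubelet/event reasons, but the hint text and lookup structure here are made up for illustration.

```python
# Hypothetical mapping from pod/container failure reasons to hints,
# in the spirit of the job-viewer hints described above.
HINTS = {
    "ErrImagePull": "Looks like your job couldn't pull its image. "
                    "Check that the image exists and that you have access.",
    "ImagePullBackOff": "The image pull keeps failing; check the image "
                        "name, tag, and registry credentials.",
    "FailedMount": "A volume failed to mount; check your volume configuration.",
}

def hint_for(reasons):
    """Return the first matching hint for a list of observed reasons."""
    for reason in reasons:
        if reason in HINTS:
            return HINTS[reason]
    return None
```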
C: Probably not, but it could, if anyone feels like [inaudible].
A: Okay, and so people can find these additional files in the artifacts link off of Spyglass. Is that correct?

C: That is.
F: The original argument from Clayton — first of all, I don't know exactly what problem you are having — but the original argument from Clayton is that Kubernetes core does not understand what a master node is. There is no concept of that. So some of the deployers have this master role label, and this is completely custom, and his work was to completely remove it.
H: No, at least for some of them it doesn't make much sense that it's even checking for this. It feels like some kind of mis-optimization to say, "Oh well, only if this particular kind of node is present am I going to move forward."
H: What do we want to do there? Because there are a bunch of things going on in the test framework that make this assumption — that you can look for a node, it's approximately something like a node name ending with "master", or something on the node object — as a gating mechanism for doing more things, like grabbing metrics.
H: Well, so we did discuss that route previously. Work on that was shot down because of Clayton's effort to remove having a label from core. There were also some few places in actual Kubernetes components that were keyed off this very non-granular label to change behavior. There are a few more granular labels that are respected, but they're not related to that specifically.
H: Sorry — it keeps some extended testing behavior from working properly, and so the question was just like: where should we change things? I mean, we could rename that to some other kind, but that doesn't seem productive — we want to move away from that naming; there's been a long-standing discussion about that. So the question is just: how do we want to approach this and the tests — this tech debt where we're keying off a node name?
A: I personally don't have a great answer to this. I mean, I kind of don't necessarily see something terribly wrong with a label that is purposed for the tests to know what they can and cannot hit. I do think, ideally, these checks should be as granular as possible, so that they wouldn't have to rely on the blanket check or concept of a "master node" or a "control plane node" — they could ask for what it is they're looking for specifically. But it seems like something that would have to be evaluated on a case-by-case basis.
A: And I think — I'm not sure how much effort we want on this. We probably want some of them, just for the purposes of making sure it is working for Kubernetes; but, for example, for conformance, I mean, we can't guarantee access to any of these components. So, of course, that seems to be the main area where, you know, people are investing time in the tests right now.
F: This is an interesting topic, because it's about using the SkipUnlessProviderIs helpers, because there are some end-to-end tests that only run on certain cloud providers. But the history with the cloud providers is well known: they are supposed to be out of tree, but they are not out of tree. So I have this puzzle: because of this, people are wary about them [inaudible]. SkipUnlessProviderIs is another one — I don't know how to move forward.
H: So I'll add a little bit more context. The particular test is checking the addresses on nodes, to see that they have both an IPv4 and an IPv6 address on dual-stack clusters. My question is: is that actually a valid requirement that we expect of clusters? And if so, then why does it need to check on providers? And to add a little bit more context on what the provider check is: it's not doing something like finding out what external cloud you are actually running on.
H: I don't know where we go from here — I was hoping this might be a place where we could get a more official response on it. I think this is a thing we should stop doing. Besides the fact that we're trying to remove cloud providers, the provider concept seems broken, given that it is just a string and isn't really mapped to any particular concrete concept of a provider anywhere in the codebase.
H: What's worse is: a few things do try to do that, also based off of a mapping from the provider string — that is [inaudible] — okay. But in this case I'm actually, more specifically, even questioning the concept of having a test that checks if there's a specific set of addresses on the nodes, when this is a thing that isn't actually — like, it's specific to the deployment. It feels like we're testing the way that the vendor chose to implement this, as opposed to Kubernetes. Okay.
G: That may be so in your case with the IPv6 dual-stack support but, for instance, SIG Storage tests have this check all over the place, because of the volume specifics for different providers. I think the overall consensus was that the framework should be agnostic in terms of providers, and this requires code organization changes — it's doable, but it's a major refactor to do properly.
F: And this is — I want to share my experience and my conclusion, so you can follow, you can get the context. So for IPv6 we decided we have to test it with kind — kind is enough for some kind of testing — and we need to test it in one cloud provider, so we had [inaudible]. Okay, then I tried talking with Justin Santa Barbara, with kops; I checked out Cluster API and checked out all the different projects.
F: The problem is that that consumes a lot of time, and it's very difficult to maintain, because most of the time you're maintaining this [inaudible]. So the conclusion that I came to is: well, I saw what is in the kube-up world [inaudible], and I have one template — that is, you can spawn it in Amazon and it creates the cluster, with three or four nodes, for conformance, with cloud-init — and it spawns the cluster and then runs the tests. It's a single file with the template and cloud-init scripts.
F: It brings up the cluster and it runs the end-to-end tests. So I wanted to know: is this something viable for testing, and, if it's viable, how can I add this to Prow? [inaudible] I'm saying this because nobody is maintaining this, and if I want to put a test in, I want to put in something that, you know, I will be able to maintain and to support. I don't want a job failing constantly.
F: I deploy it manually with my account — and this is another problem: I don't want to spend my money on clusters. But you can do [inaudible] — and this is the thing: it's a Kubernetes cluster; it's not Kubernetes with the Amazon integration and all that, you know — it's the conformance tests in a generic cluster in Amazon.
F: That's why I'm saying that this is very limited scope — it's not a full cluster integrated with Amazon. I know that people are working on that, and I tried to work with that, but, you know, in the end it's a lot of things that I don't have time to support and maintain, and the development is complex.
F: I already did that [inaudible] and I had a patch. The problem is that, when I did that [inaudible], the problem is that with alpha3 it was unstable, because they had problems when you add a new [inaudible], and I had a lot of panics. So I developed it in alpha2, but the problem is: when I had it working in alpha2, they were moving to alpha4. That's why I tend to say, you know — the problem is, I...
H: Yeah, kubetest is entirely orthogonal to this problem. The problem is: we need a reliable mechanism by which to get a cluster in the cloud with dual stack, so that means we need some actual cluster-provisioning code that supports this, right? And, you know, he lost a lot of time trying to do this with one of the cluster lifecycle projects. It doesn't sound like the other projects are going to do this for us. How do we get this to be a thing that's actually going to be maintained?
H: That, conversely, also needs to be maintained, and it has to get into one of the existing ones somehow. I think it's pretty reasonable, at least in the short term, to hook up the CloudFormation thing and get some signal, without necessarily committing to anyone else supporting that long-term. I just don't think that the feature can go, like, GA or whatever, depending on this thing.
A: Yeah, I mean, it sounds like it's going to be preferable to have one of cluster lifecycle's ways of deploying a cluster support this; but throwing the CloudFormation templates, and the appropriate scripts to launch them, into a separate repo, and then plumbing that through to kubetest, sounds like it could be feasible. The unknown for me is: which build of Kubernetes are you picking to launch with these CloudFormation templates, and how are they going to pick that up, right?
F: Yeah, this is the nice thing, because the Cluster API guys have some instances that they are building with the latest bits. So this is a temporary thing until their provisioning matures and they catch up — I don't plan to keep doing it. I know that the Azure guys are working on something; I just want to, you know, try to get traction, and have people see that this is working.
F: You know, they can move faster, because my thought is that if we have something working, sooner or later people are going to start to demand it — and this is how it works. If the projects see the demand, it's going to take, really, you know, one or two weeks — just put a developer on it to develop it. It's not a big deal; it's time.
F: They are pretty helpful. The problem is that the project is moving too fast, you know. And I was assuming that they weren't going to maintain it — you know, if I develop it in alpha2 and they are on alpha4 and my patch is not in, I was assuming that they would ask me to redo it for alpha4. And the other thing is: what happens if they break something with [inaudible] when removed?
G: As a consumer, they're probably not going to break you dramatically. As long as you have your valid configuration for v1alpha2, you could then manually convert this to the next alpha — the next alpha is coming in July this year. So you have time to continue operating on this alpha. But, yes, there is a lot of manual coordination between the manifests that you have to write for the Prow job and what they pass through [inaudible], but the images are going to be solid, so you pull an image.
G: We captured this in a Prow job, and we started tagging repositories in the org, like the [inaudible] repository, and the proposal from Katherine was to use a postsubmit job. We previously discussed a Prow plugin with Steve, but there was a consensus that a postsubmit job is the better approach. I guess my question here is, like: how do we even grant the postsubmit job privileges to have write access on some of these repositories — possibly using a token?
G: Yeah, I had a brief discussion with James about this, and he said that the publishing bot currently is doing that, but it also pushes source code. So it's not only tagging commits — for instance, tagging the latest commit — it's also bound to pushing source code, and we don't want that. We only want to tag or branch releases, like that.
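Tagging a commit without pushing source is possible through GitHub's git-references API: creating a `refs/tags/<name>` ref that points at an existing commit SHA. This sketch only builds the request URL and payload (the org/repo/tag values are placeholders); a job would still need a token with write access to send it.

```python
import json

def tag_request(owner, repo, tag, sha):
    """Build the (url, body) pair for creating a lightweight tag ref.

    Uses the GitHub REST endpoint POST /repos/{owner}/{repo}/git/refs;
    the body schema ({"ref": ..., "sha": ...}) is the documented shape.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/git/refs"
    body = json.dumps({"ref": f"refs/tags/{tag}", "sha": sha})
    return url, body
```

Because this touches only refs, a token scoped for it never needs the ability to push source code — which is the separation being asked for here.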
A: I feel like some of the work that Eric has done — with Prow needing to use Workload Identity for containers — may conflict with the desire to increasingly add more trusted functionality to more trusted jobs; but your proposal seems sound to me. I mean, a postsubmit in the trusted cluster would be the starting place, and we could figure out how to iterate from there. Like, my personal preference would be to find a way to do this potentially using Prow's GitHub token, but seeing if there's a more appropriate, more scoped GitHub token.
H: Possibly; but even if it doesn't necessarily do that, we could follow the model — which, I think, is that there is a GitHub account that's used for that, with a token — and I believe the security around that is that this is run in GCB. I think, for that, it's actually kicked off by humans, but we have patterns for kicking that off from Prow as well.
G: Apparently the code that is doing the tagging [inaudible] is concurrent, and there's some state stored in global variables and things that it needs — they don't want to touch it, and I was thinking about it. So my question is: is the addition to the trusted cluster something that is concerning, to the extent that maybe we should really look for the alternative [inaudible]?
H: I mean, the nicest thing there is that it's a little bit stronger multi-tenant — like, the other thing is that we can actually give people access to these [inaudible] — whereas the trusted cluster is kind of all eggs in one basket, especially given that the trusted cluster is not only trusted build jobs but, like, Prow itself.
G: Yeah, if there are major concerns we don't know of; otherwise it definitely feels like the place to add this. Also, I would like to mention that, apparently, SIG Storage has a similar demand: they want to tag a repository — or rather, the repository is already out there; they want to tag and branch it, synchronized to [inaudible], for some reason. So I guess this is not only SIG Cluster Lifecycle that demands this.
K: The process of rotating is: you rotate the master token and all the webhooks. And so there was an action item — I mean, there was a bug created, like, you know: how can we make sure that this never happens again? And that design doc is kind of a proposal to solve that problem. And let me just share the link here as well — I have shared the link, you know, over the document which Aaron added earlier. So the goal here is just to give a quick, you know, design overview.
K: The goal here is to not have a master token — like, a global token — anymore. There would still be a global token, but all the repositories or organizations will have their own token. So if, let's say, there's an organization that wants to onboard onto Prow, they would have their own organization-specific token. So there is a hierarchy of tokens: the global level — maybe just for backward compatibility — the organization level, and then, you know, the repository level.
K: If there are organizations which may decide that, hey, we want to maintain a different token for every repository, they can choose to do that, and we would be using the most specific token configured for that organization or repository. So if, let's say, a repository has a repository-specific HMAC token, then they cannot authenticate their webhooks using the global token or the organization-based token — they have to use the repository-based token.
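The "most specific token wins" rule described here is a simple lookup hierarchy. A minimal sketch — the `"org/repo"` / `"org"` / `"*"` key scheme is this sketch's own convention, not the actual Prow config schema:

```python
def select_token(tokens, org, repo):
    """Pick the most specific HMAC token configured for a repository.

    `tokens` maps "org/repo", "org", or "*" (global fallback) to token
    values. The first match in most-specific-first order wins.
    """
    for key in (f"{org}/{repo}", org, "*"):
        if key in tokens:
            return tokens[key]
    raise KeyError(f"no token configured for {org}/{repo}")
```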
K: So the benefit of this would be that, you know, if a token gets leaked, then of course only, you know, that repository or organization is compromised — I mean, only that is affected, for the time period. And we can go look at the design document, if people want — or, you know, I can just give a little overview, and then folks can take a look at it offline.
K: So another thing with the rotation problem right now is: if, let's say, we have just one master token, and then we have, let's say, 20 or 30 — n number of, you know, organizations — using that single token, the moment you rotate your master token, all of the webhooks are broken, because it's not, you know, an atomic operation to update all the webhooks out there. So how do you, you know, make sure that the rotation does not break anything, even momentarily?
K: So having, you know, kind of a repository-based or organization-based token will allow you to, first, you know, kind of make that downtime much smaller; and in addition to that, what we're planning on doing is saying that, hey, when you rotate a token, you can give your previous token — if you decide — you can give your previous token a little bit of expiration time. You can say: hey, expire my previous token after five minutes. So what will happen during that five minutes?
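The grace-period rotation described here means webhook validation has to accept either the current token or a not-yet-expired previous one. A sketch: the `sha1=<hexdigest>` format matches GitHub's `X-Hub-Signature` header, but the `(secret, expires_at)` list structure is this sketch's own invention, not the design doc's schema.

```python
import hashlib
import hmac
import time

def signature_valid(body, signature, tokens, now=None):
    """Validate a GitHub-style webhook signature against several tokens.

    `tokens` is a list of (secret, expires_at) pairs; expires_at=None
    means the current token, while a rotated-out token keeps a short
    expiry window so in-flight webhooks keep working during rotation.
    """
    now = time.time() if now is None else now
    for secret, expires_at in tokens:
        if expires_at is not None and expires_at < now:
            continue  # grace period over: token no longer accepted
        mac = hmac.new(secret, body, hashlib.sha1)
        if hmac.compare_digest("sha1=" + mac.hexdigest(), signature):
            return True
    return False
```

During the five-minute window both tokens verify; after it, only the new one does, so rotation never hard-breaks deliveries mid-flight.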
K: And also, you know — yes — a couple of things: right now the token management and updating is not completely automated. This is another thing we want to do as part of this change: when an on-call person, or whoever is rotating, you know, a token, or onboarding a new repository or organization to, you know, Prow, they won't have to copy the token and, you know, go to the GitHub UI, etcetera.
D: That being said, like, all the secrets are still in the cluster. I'm not sure — we definitely considered that, and I'm not really sure if there's any benefit that we get from having separate secrets. The only reason that we'd want that is: we wanted people to be able to supply HMACs themselves, and use some sort of secret store, and allow people to, like, have access to individual secrets. But that's, one, something we don't have now, and, two, I...
K: Okay, thanks. And also, for backup and, like, you know, just maintenance, having a single, like, secret would be better — hey, we can update the file, we can upload it, you know, onto the secret store — I think it could be just easier. So, like, it'll be less overhead to maintain, so that we don't lose our secret.
D: If we do see a reason to have multiple secrets down the road — I know, this was not a blocker; sorry — if we do find a reason that we want multiple secrets down the road, this design doesn't really block that: we could always merge config files programmatically, or do things another way. So we're not really locking ourselves into that.