From YouTube: Kubernetes SIG Testing - 2020-04-21
A: So welcome, everybody, to the Kubernetes SIG Testing bi-weekly meeting. Today is Tuesday, April 21st. I am your host, Aaron Crickenberger. We are all being recorded right now, and this will be posted to YouTube later, so you can all watch yourselves adhere to the Kubernetes code of conduct by being our very best selves and not being a jerk.
B: That's what I call it, yeah, great. Thank you. I apologize; I should have done some more due diligence beforehand. This is kind of an intro, and Jared's on the line, who was a big part of putting this together. We are part of the KUDO team, which is about declarative operators, and in the process of putting KUDO together, we created what made the most sense to us: a declarative way of testing. And it turns out that there was a lot of interest in that.
B: So I wanted to introduce the concept and find out from this group what the next steps might be. I'm looking, or hoping, to make people aware of it, and I would love for it to be a part of the Kubernetes ecosystem as a whole. Just to give you a quick rundown of what it does: it's written in Go, and it's for essentially integration or end-to-end testing, though I would say the push is more for end-to-end than integration.
B: The use for integration testing is that it can spin up just an instance of an API server and etcd and serve as a minimalistic test against APIs, though that obviously doesn't have controllers and other operations. The end-to-end side is through a tight integration with kind, so there's an expectation, for using that, that you have kind on deck.
B: There is a kuttl.dev site, which is, again, information I put together in one day, but apparently I've already been told that it has a lot of information, and that probably stems from the fact that we generated the docs for KUDO and then just made the transfer over. Anyway, with that, I could go on, but I don't want this to take too much time. That is what it is.
B: We see a lot of value in being able to go through test steps, where we apply or create things, potentially delete things, and then make an assertion that something is true or is not true. It also has the ability to run commands, so if you have your own kubectl command, or any kind of command that would run on the platform you're on, that is also possible and sometimes useful. So with that, I'll open it up for either conversations or, hopefully, a discussion of where we should go next.
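As a sketch of the command-running feature described here, a step file might look like the following (the kuttl.dev/v1beta1 kind and the commands field reflect my reading of the KUTTL docs; the $NAMESPACE expansion is an assumption):

```yaml
# A KUTTL test step that runs an arbitrary command in addition to
# applying manifests; KUTTL expands $NAMESPACE to the namespace it
# created for this test run.
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  - command: kubectl get pods --namespace $NAMESPACE
```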
B: Thanks, Jordan. So, to say it just made sense: I haven't been able to get an official icon or image for this, but a cuttlefish just makes the most sense to me. So, with that, let's get started. Creating your first test is probably the best route to go. It's probably worth noting that we can integrate from the CLI, as a standalone thing, or you can integrate with the API. There are four steps... I seem to have lost my place.
B: I don't even know my own docs... there we go, thank you. Okay, so, you know, it's the creation of YAML in the way that we would expect. The important thing about this YAML is that, assuming somebody might have done a pre-step, there could be the existence of this thing already in the cluster, and this would then be a strategic merge against that, so it would just update the latest. So this can be very brief.
B: In your very first step you would probably be very verbose, in order to create the object you're looking for. Then we have the filenames; we were making some changes that are active now, but essentially there are numbered files, and the numbers become the steps, with the expectation that, if a step has an assert, that is an assertion of truth within that step. So all the YAMLs are essentially applied in some way: either created, or updated, or potentially deleted.
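The numbered-file convention being described might look something like this for a single test case (the directory and file names here are hypothetical; the index prefix and the -assert suffix are the conventions I understand KUTTL to use):

```yaml
# tests/e2e/example-test/00-install.yaml   -- step 0: applied to the cluster
# tests/e2e/example-test/00-assert.yaml    -- step 0: asserted after applying
# tests/e2e/example-test/01-scale.yaml     -- step 1: strategic-merged over step 0
#
# Contents of 00-install.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx
```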
C: I'm gonna say it's important to note that this is a partial assertion, right? So, you know, you're asserting that the status here is ready, right, because it says three ready. And this actually doesn't have to be about the resource itself. So let's say you're testing a CRD, and you want to test that your CRD creates a StatefulSet: this could be a kind: StatefulSet.
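A minimal sketch of such a partial assertion, under the assumption that an assert file only needs to list the fields being checked (the resource name and replica count are made up):

```yaml
# 00-assert.yaml: the step passes once a StatefulSet named "example"
# reports three ready replicas; all other fields of the live object
# are ignored by the comparison.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
status:
  readyReplicas: 3
```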
B: And there are probably two aspects of this that make it unique, or I'm expecting that it's unique; somebody might correct me. One is that it's declarative, so it's very YAML-friendly for those who are used to seeing that. But the other is that it's geared around testing operators or controllers, so that expectation is kind of baked in there. This would be how we might test something: we're saying test all the things under here.
B: So under e2e there would be a series of folders, and those folders are tests, and those tests would have YAML files, which would then be ordered and asserted in some way. So there's the documentation, and there's a quick run-through; there's a whole lot more as far as being able to integrate it with existing test frameworks or run it against different control planes and such. And, you know, there's an order to this madness.
C: As far as some users that we have: of course, we're using it for KUDO. Operator SDK is integrating it in for doing operator scorecards. We have a bunch of other teams using it for totally non-operator things, even just to test Helm charts, especially ones that have subsequent resources. So, you know, we've seen a lot of interest; we don't think we've even come close to the number of possible use cases for this.
C: You know, we're evaluating a couple of other things, like potentially having Rego assertions in there, and a few other things, depending on where the community drives it. And we're following a community governance process from the outset: we do KEPs, but they're KUDO enhancement proposals, because we don't have a SIG.
C: So it's like a mini KEP. And, you know, we're working at getting KUDO itself into the CNCF sandbox; we're hoping to donate this somewhere as well, because we're trying to make sure the right product is in the right place. So we'd be open to wherever this goes; our thought is we would either submit some of this to the CNCF sandbox, or, you know, a testing subproject, or other places. But we would really love to continue to foster it out in the community.
B: Yeah, with that said, one thing left off that list, and my memory was jogged while Jared was explaining some things: this is totally usable for setting up a cluster as well, and then asserting that that setup was what you expected it to be. Now, as I presented here to the SIG, I'm really more focused around testing as a whole. If you have thoughts or advice on what we should do next, or on what we might be able to get the SIG's support around, that would be super helpful and valuable.
A: I feel like that person who needs some time to evaluate. Speaking from a SIG perspective, we're always happy to have people who are interested in donating their stuff as a subproject, to become maybe a more cohesive part of the Kubernetes ecosystem. My caveat there is that, because we don't really know much about this, it's not like you would suddenly get a pool of maintainers for free, but we would be willing to help you become part of the community.
A: To me this looks like maybe a nicer way of doing some of the end-to-end testing that we do today. I think of this specifically from the perspective of conformance testing for Kubernetes, where we want to be very black-box about the behavior of a Kubernetes cluster, and so being able to declaratively say: given this resource, and this thing that I'm going to adjust on this resource, what do I expect the outcome to be. So it looks like most of this is about declaratively saying what you expect the resources to look like, but it looks like you do also have the ability to specify a command in one of your tests, because I think sometimes, from a conformance perspective, we're interested in verifying connectivity between things, and maybe this could be useful for doing that sort of thing.
B: You know, absolutely. And I would add in there, real quick, and I totally failed on this part, I apologize: if you look at the Kubernetes testing that's in core Kubernetes right now, oftentimes it's bash assertions, which are... it's great that we have testing, but those could really be much easier to understand and to maintain if it was just YAML. I meant to have that as an example, and I'm hunting for it now.
A: Yeah, so I'm happy to take more of a look at this when I have time. Hippie Hacker is on; ii.coop is the name right now, I believe, and he also works heavily in the conformance subproject. So that could be an interesting place to look for possibly more engagement. And then I'm happy to chat with you offline about what donation as a subproject looks like, if that's the path you're interested in pursuing.

B: Fantastic. Yes, definitely, sure.
C: We're already fairly involved as well in the kubebuilder subproject; we didn't donate that, but we operate well within that space, whereas this was just a nice artifact of some work over there. So we're really looking forward to that. I think, first and foremost, our goal would be to help, of course, get this throughout e2e testing, if it makes sense. You mentioned maintainers; I think right now the backlog, I mean...
C: Of course that will increase with more use, but, you know, the main goal around donating this back is that our core company's focus is on writing operators, and this is a tool in our toolbox, and we've seen a lot of other people wanting to use that tool, and so having a modicum of neutrality over it, of course, makes sense on its own.
E: This is Hippie. I have a team of folks, and we're working on increasing the coverage of the conformance tests, and we are writing a lot of tests from scratch. I wouldn't mind spending some time with it at some point to take a closer look, and I'd be glad if we could do a pair session and go through some of your existing work, if you have any. Yeah.
F: ...logic that is actually used for end-to-end tests to connect to a cluster, something of that sort: decoupling the logic from the functions that check if a certain number of pods exist, or some other conditions. For the last year, that's been the main effort, just trying to get the current e2e test framework into a more manageable state, so it is something that is easier to digest. That whole area of the project has been growing more or less organically, based on the tests themselves...
F: ...and based on the needs of each SIG, whatever they actually need to test. And for some time now, within the testing-commons subproject, we've been toying with the idea of taking another step towards organizing the framework and all the utility functions that are used for end-to-end tests right now, and hence this is the Kubernetes Enhancement Proposal that I wanted to propose today and ask for feedback on. The TL;DR...
F: ...is to ultimately move the end-to-end test framework and its packages into staging, so that other projects outside of Kubernetes can properly use it without having to import all of Kubernetes. The other main goal that we had in mind with putting it into staging is as a way of forcing us to actually go back into the design of the framework and all its utility packages, to properly surface a useful API, useful utilities, for people to write end-to-end tests.
A: ...be more stable feels a little backwards to me, maybe, but I do really agree that one of the first steps needs to be better use of something like import-boss, to make the e2e framework more self-contained. I think, you know, the idea of not relying on anything inside of the Kubernetes repo is a great idea, especially not relying on anything inside of the pkg directory. That's important.
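For reference, import-boss enforces this kind of restriction via .import-restrictions files placed in package directories; the sketch below shows roughly the rule shape I understand it to accept (the regexp and prefix lists are illustrative, not the real e2e framework policy):

```json
{
  "Rules": [
    {
      "SelectorRegexp": "k8s[.]io",
      "AllowedPrefixes": [
        "k8s.io/api",
        "k8s.io/apimachinery",
        "k8s.io/client-go"
      ],
      "ForbiddenPrefixes": [
        "k8s.io/kubernetes/pkg"
      ]
    }
  ]
}
```

With a file like this in place, a verify job fails any PR that adds an import matching the selector but not covered by the allowed prefixes.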
A: ...things like that. And the last thought I had: one of the problems that Hippie is actually also encountering from a conformance perspective is reliance on kubelet APIs, and I don't know if anybody has asked, or figured out, why the APIs around the kubelet stats endpoint and stuff like that should not be published APIs, because I think, aside from kubemark, that's the other main thing.
A: So, I don't know. I'm sorry, that was a whole lot to say all at once, but the TL;DR of it is: I've been super in favor of trying to make the e2e framework's imports more self-contained, but I feel maybe less certain that just moving it into staging, after having done that, is a good idea, without maybe some more thought put towards the supportability guarantees we want to put there, and how much of the provider dependency stuff...
F: ...we actually really need. Those are actually really good points, and I think, first off, thank you very much for mentioning all of that.
F: And I guess one of the really nice features of actually getting it outside of testing-commons is sharing it with the rest of the community. The current working plan has been more or less to really just move things into staging: to decouple the framework as much as possible, then put it into staging so people can consume it, and then clean it up eventually. I also think that will, in a way, imply that we need to think about API compatibility and things of that sort; we can't just go willy-nilly removing or renaming things. In a way, that's been the current plan. The other similar alternative that comes to mind is in the KEP:
F: ...to use import-boss as much as possible, to make sure that we keep really strict control of the imports that are allowed into the end-to-end test framework, and as we get to that point, we can keep on refactoring the end-to-end test framework.
F: So, if we had to choose the first effort between the first two alternatives that I mentioned, I think it's definitely the second one: just working on import-boss, keeping the framework where it is right now, having that as phase one, and let's just first get the number of dependencies to a little bit better place.
F: The other thing I also wanted to ask for, as feedback, and this is the third option, I guess, to call out before actually beginning with this KEP, assuming that it was perfect as it is right now: the third option that comes to mind is to work with SIG Node or SIG Scalability to actually publish those APIs in some other way, so that people can test against them.
F: For example, so other people can actually do things with measuring metrics from the kubelet without actually having to import Kubernetes, and kind of look for those places within the end-to-end test framework that we should expose beforehand. And I'm imagining that some of them might turn out to be their own KEPs.
A: Asking those folks to make their things more reusable outside of Kubernetes is, to me, a more appropriate way of dealing with the problem of not being able to test them without internal Kubernetes. And, like I said, I'm fully supportive of phase one, of trying to decouple more things and use import-boss more. Just from the provider perspective, giving you a heads up: I tried that, just experimentally, to get a sense of what the scope of work is there, and the providers are your biggest problem.
F: So, taking that into account, and I'm just asking for advice now: do you think it would be sensible to keep the KEP as it is right now, and just note that at some point we eventually want to move the e2e framework into staging? Or would it be better to reduce the scope of the KEP to just phase one, paring down the dependencies, and let's keep working on that?
A: So, I see Ben wants to say something, but I'll just answer this real quick. In the graduation criteria section of this KEP, you talk about how this isn't a traditional feature and so doesn't need to do alpha, beta, stable. I actually feel like alpha, beta, stable are perfect phases to denote the level of stability and reliability we want here. I would say shuffling things around inside of its current directory and trying to untangle the dependencies is the alpha phase, and the completion...
D: I have some questions regarding, like, the motivation for staging it. Are we trying to target actual third parties? And, more of a common question: looking at the ones that were mentioned in this list, except for Sonobuoy, which is kind of an odd one out, and I'll come back to it, the other ones so far all appear to depend on k8s.io/kubernetes in other packages, in non-test ways, and they all seem like kind of first-party projects. Are we trying to, like...
D: Are we also solving that problem, or are they going to continue to depend on k8s.io/kubernetes anyhow? And do we actually want real third parties using this? I feel like, if I were going to build a test framework for third parties, I'd probably actually just start from scratch, instead of trying to stage what's in k8s.io/kubernetes, and know that it doesn't depend on Kubernetes, and maybe rethink how it works, as opposed to just taking the thing that we have, that we use to test, that we need.
F: ...the current state of the KEP is to take a copy of what's in the Kubernetes codebase, so that we can take what we already have and shuffle it around to accommodate people, and that's been the way of actually trying to maintain what we have.
A: So, next up, I had a couple of things on the agenda. We're running short on time, so I'm gonna go through them super quickly. I had wanted to talk about extracting: ideas on how to best extract prow over to wg-k8s-infra. I did not have time to actually put said plan together, but one quick question I had for the group, which feels like it was raised by problems we had with node e2e testing:
A
It
doesn't
necessarily
give
the
community
any
more
insight
into
Prowse
behavior
itself,
but
it
would
allow
us
to
start
using
a
lot
more
of
the
funds
that
have
been
donated
to
run
the
kubernetes
project.
Does
that
seem
like
maybe
a
good
idea
to
pursue
first
or
you
think
it
would
be
more
worthwhile
to
pursue
like
setting.
D: So, we discussed this in wg-k8s-infra previously. I'll just briefly point out that if you do the projects first, and you start actually using them, all of the state about what's in use is in boskos, which is in the build cluster, and you'll wind up having to move that again. It will be slightly cleaner if you can do the build cluster first, and have all the boskos state and whatnot, and then this is just a one-time transition between boskos instances.
A
Okay,
if
anybody
has
any
other
thoughts,
feel
free
to
chat
with
me
on
slack
I'll.
Keep
us
moving
I
like
to
try
and
take
a
look
back
at
the
PRS
we
have
merged.
Since
we
last
had
a
meeting
and
point
out
anything
that
looks
interesting,
a
couple
things
that
looked
useful
to
the
community
at
large
one,
really
simple
user
friendly
thing.
Apparently,
if
you
typed
in
slash
retest
to
retry
or
all
of
your
failed
jobs
on
a
PR
that
wouldn't
work
unless
retest
was
the
very
last
thing.
A
And
then
the
other
tiny
little
bit
of
maintenance
that
I
have
linked
in
the
meeting
notes
is
Eric's,
which
are,
for
our
instance,
is
plank
config
to
use
a
report
template
config,
which
allows
us
to
specify
when
proud
comments
on
your
PR
about
jobs.
Failing
or
not.
We
can
now
configure
what
that
looks
like
on
a
per
org
or
a
per
beep
a
basis.
If
there
are
people
who
feel
like
they
want
browse
report
to
look
maybe
a
little
different
or
expose
different
things.
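If I have the shape right, the prow config fragment for this looks roughly like the following (the org name and template text are made up for illustration):

```yaml
# plank's report templates can be keyed per org or per org/repo,
# with "*" as the fallback, instead of one global Go template.
plank:
  report_templates:
    "*": "Job {{.Spec.Job}} finished with state {{.Status.State}}."
    "example-org": "See {{.Status.URL}} for details on {{.Spec.Job}}."
```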
H: So there have been a variety of things that we were switching in the config so that you can, you know, configure a repo specifically, rather than having to use some awkward Go templating to do that. So yeah, that's been super great. On the subject of switching pod utilities on: there are two ways that you can write a prow job.
H
So
all
that
comes
from
decorating-
and
you
know
for
the
past-
I,
don't
know
at
least
a
year,
maybe
two
years
now
that
sort
of
been
the
way
that
we
want
people
to
write
jobs
and
most
new
jobs,
use
that
format.
I
still
think,
there's
a
some
of
our
Indian
Indian
jobs.
Probably
a
lot
of
them
haven't
been
migrated
to
use
using
decoration.
Yet
so
there
are,
we
do
have
jobs,
especially
in
the
kubernetes
org
that
do
not
use
decoration.
H
They
just
create
a
pod
and
it's
totally
up
to
the
pod
to
do
whatever
it
wants,
like
there'll,
be
environment
variables
injected
about
like
what
commit
you're
supposed
to
check
out
or
whatever.
But
what
actually
happens?
It's
totally
up
to
the
pod
and
we
right
now
that
is
the
default
behavior.
So
unless
you
opt
in
to
using
decoration,
we
will
not
decorate
your
pod
with
the
nice
goodness
to
check
out
repos
and
upload
logs,
and
we
would
like
to
switch
that
on
to
being
the
default.
H
And
so
basically
you
know
if
your
job
explicitly
says
decorated
false.
We
will
continue,
not
decorating
a
job
or,
if
you
explicitly
say
decorate.
True,
we
will
continue
decorating
your
job,
but
otherwise
we
will
change
the
default
behavior
if
nothing
is
specified
from
not
decorating
a
job
to
decorate
a
job.
H
We've
done
this
before
with,
like
initially
I,
think
our
default
agent
was
to
talk
to
Jenkins,
because
that's
what
most
of
our
jobs
were
and
then
we
split
it
over
to
be
the
kubernetes
agent
that
creates
a
pod
instead
of
the
Jenkins
job
by
default,
and
that
mostly
involved
you
know,
going
through
all
of
our
jobs
and
making
sure
that
was
explicitly
defined
everywhere.
And
so
that
would
happen
here.
H: Let's see... so yeah, the other thing is, you know, the test-infra repo uses Bazel pretty extensively, and we also use it in the kubernetes/kubernetes repo for our CI, and one of the challenges there has been switching between Bazel versions, because oftentimes Bazel is not backwards compatible. That's less the case now, you know, since 1.0, but, I guess...
H
The
it's
important
to
you
know
build
with
this
consistent
version
that
everybody
wants
to
build
a
particular
version
of
basil
and
switching
that
historically
has
been
kind
of.
We
haven't
had
a
great
way
to
do
that,
atomically
in
a
single
commit,
there's
a
tool
called
basalis
that
allows
you
to
add
a
file
called
basil
version
that
specifies
explicitly
what
version
of
basil
your
repo
is
expecting
to
build
with,
and
that
has
been
working
pretty
great.
H
You
know
we
can
kind
of
just
say
from
2.0
or
to
2,
and
then
people
you
know
run
doing
basil
test
or
basil.
Build
will
make
sure
that
once
installed,
but
for
CI
you
know
that
hasn't
CI.
Typically,
we
will
have
a
image,
we'll
use
a
you
know.
Dr.
Amit
has
a
single
version
of
basil
installed
and
so
the
default
behavior
was,
you
know
if
we're
using
an
image
with
2.2
and
the
repo
says
2.0
the
thing
blows
up
because,
like
there's
no
tool
2.0
installed
and
it
doesn't
want
to
download
it
and.
H
Great
image
so
use
the
new
thing
and
then
hope
it
works
and
if
it
doesn't
fix
anything,
that's
broken,
but
we
wanted
to
be
a
little
bit
more
careful
for
the
main
repos,
and
so
what
we
want
to
doing,
which
I
think
is
actually
a
pretty
great
pattern-
is
to
just
create
an
image
that
has
both
well
all
the
expected
versions
of
Basel
installed
in
there.
So
we
had
something
that
had
both
23.2
and
2.2
installed,
and
so
that
way
we
could
in
a
single
PR.
H
You
know
have
both
of
the
switch
the
da
Basel
version
from
23
to
to
2.2
and
then
do
whatever
rules
changes.
We
needed
to
support
that
and
make
sure
that
you
know
the
repo
is
passing
test
without
the
change
on
the
old
version
and
continuing
to
pass
tests
with
the
new
version
and
the
new,
updated
rules,
and
that's
been
pretty
great
because
we
can
kind
of
you
know,
clip
those
two
versions
and
so
we're
updating
our
images
to
sort
of
specify
what
the
old
person
was
and
what
the
new
version
were.
H
Specifying
too
and
after
we
roll
out
all
of
those
images,
then
we
can,
you
know,
change
the
top
Basel
version.
So
that's
been
a
pretty
good
pattern
for
us
and
I
think
the
main
problem
is,
you
know
we
have
a
surprisingly
large
number
of
basel
images,
they're
all
fairly
similar,
but
slightly
different
from
each
other
for
mysterious
reasons,
and
so
you
know
like
yesterday,
I
guess
on
Friday
I
wound
up
grading,
all
of
our
pre
submits
and
one
of
our
post
submits
and
our
CI
job.
H
But
there
were
a
bunch
of
other
post
admits
that
would
push
Basel
or
you
know,
push
images
or
deploy
them,
which
use
a
slightly
different
version
of
the
image
that
still
had
2.0
and
so
and
had
not
been
upgraded
to
use
the
multiple
versions
so
I
want
to
breaking
all
those
post
omits
over
the
weekend,
because
that
image
was
slightly
different
and
apparently
they're
like
two
different
variants
of
that
image.
So
I
think
long-term.
H
You
know.
Hopefully
we
can
make
updating
this
less
painful
and
more
efficient,
but
yeah.
So
that's
sort
of
an
interesting
thing,
but
I
would
you
know
if
anybody
is
using
Basel
I
would
definitely
recommend
the
the
pattern
you
could
check
out
like
the
pukin
zde
image
or
the
basel
image
inside
of
the
images
directory
and
the
test
and
for
repo
and
yeah.
So
then
I
feel
like
this
far.
It's
been
working
pretty
well
and
that
is
all
I
had
for
both
of
those
things.