From YouTube: Kubernetes SIG Release 20191021
A: All right, this is the October 21st, 2019 SIG Release meeting for the Kubernetes project. This is a recorded meeting, and we ask everybody to adhere to the Kubernetes code of conduct during the meeting. I think it might be a short meeting. I will drop the meeting notes document in the Zoom chat, just for reference, for anybody there. If you could add your name to the attendees, that would be awesome, just to track who all was here. So normally in the SIG Release meeting, distinct from the release team or the release engineering meeting, we go through our subproject updates. Licensing is the first one on the list. I'm going to pull up the project board there. I don't know that we have anybody on the call who can speak to anything there; yeah, the open issues. Right now it looks like we don't have anybody on the line who could talk to those, so I'm going to skip that for the day. Release engineering status.
A: So last week we did patch releases of all of the support branches, including the last build for 1.13, and we actually had a couple of issues there. The normal way the build works is that there's a staging process, which does the building; then validation can happen, and publication happens from there. In between staging and publication, the publication step failed due to missing artifacts, and the logging basically didn't show anything that I could see. This is maybe more in the context of release engineering, but just to mention it here: we've done a lot of talking and brainstorming about how to update the tooling there. We've kind of had this magical tooling that was handed to us historically, and it's mostly worked, but when it fails it's very opaque. So this is something that hopefully we get better on. Additionally, after we did that, we discovered that the release notes were empty, and in release notes there are at least a couple of bugs.
A: So this has been a bit problematic. What are we, a month and change, a month and a half since 1.16 came out? We do have a scalability regression that we saw. The expectation was that the fix for that was going to be in the patch releases that came out last week, but those got preempted, mostly by a security fix. So I need to go look; I'll check after the meeting and follow up with Jordan, and probably Christoph as well, to see where that stuff is at, at a minimum.
B: Sorry, I just had to move to someplace where I could actually talk. So, a couple of updates. On the last three items that you mentioned: they cleaned up the SIG Release dashboards, and most of that has been done, so I will make a comment in there and point people to the appropriate places, so if anyone's interested, all of you can get background on the issues and the PRs where the conversations happened. The other one that you pointed out, about the artifacts published by the release-blocking jobs: that was brought up last week during the release engineering meeting, I believe, but that was just left up for conversation. So no, I don't think anything more has been done there; if anything, Stephen is definitely the person to ask about that.
B: Really quickly, the issue, just for context, is that last release we had a lot of problems with all of the tests due to a Boskos outage, and that outage affected all the testing. The test-infra folks, everyone that works on that, have been working on making the whole procedure of bumping a version of Boskos, Prow, or anything related to Prow more secure: adding better observability, doing updates more frequently, and all those things.
B: The big issue from their side, take Prow for example, and really anything related to Prow, is that Prow is not just used by Kubernetes. If they actually had a testing freeze, they would essentially have to freeze Prow for a couple of weeks, and during those couple of weeks development is probably not going to stop, because people outside of Kubernetes work on and contribute to Prow on a daily basis. It would be kind of an issue; we could be causing a problem for ourselves, because right now Prow and everything related to Prow actually gets a version bump every single day. So with this hypothetical testing freeze and no updates, nothing would actually be tested for possibly two weeks or more, and then they would have to do it all at once. The freeze proposal was a bit different, but potentially we would end up having the same problem that we had with Boskos, which was related to using a Boskos version that was, what, three months old, a really old Boskos version.
C: Let me jump in here. I'm not sure, and sorry, I apologize for joining late, so this might have already been said, but I think that what we're trying to do, or what we're trying to see if it's possible, is not so much to freeze Prow development. We don't want to freeze that; we don't really care to do that, specifically.
C: What we want to do is protect the configs of things that are running in the release repos and in kubernetes/kubernetes, right? So, basically, stuff that lives under the config/jobs/kubernetes, sig-release, and release folders. Those are the big things that we want to slow down or freeze, if we can. So I'm not sure if putting a finer point on those changes, on those freezes, is what's being discussed, but...
B: Yes, a really quick note on that. For example, for all the configs that you mentioned, I think we could potentially work on something like that. But I think originally this testing freeze was actually brought up because of the Boskos update that was done last release during code freeze, and the Boskos update is outside of the config, outside of any Prow configurations that we use for anything.
C: And no, I mean, it's just a few folders up, right? So creating a new, I think, bumping a version of, you know, creating a new version of some test-infra component is not the same as deciding to bump it for the Kubernetes project, right? As you mentioned, there are a bunch of different projects that leverage Prow. I can think of one, which is OpenShift, but I know there are a bunch of others that leverage Prow, so yeah.
B: Just to close that out, the response from the test-infra side was, you know: releases, we can't stop them, so things are going to keep coming in, and if they actually stop any version bumps during code freeze, then after two weeks we're going to end up with a ton of changes that haven't been tested, that haven't been tested on anything in the Kubernetes community, but...
B: From what I saw, and this is based on an earlier conversation, the quality control there is healthy. They actually mentioned a lot of things in the discussion that they're still working on, you know, adding more alerts whenever something looks goofy. But at least that was part of the process, along with the fact that they bump ahead to a new version every single day.
C: So we have tried to, you know, I've essentially set up a blockade on the kubernetes/release repo, so that changes to certain files that can break more than just our jobs need to be vetted, need to be vetted by a sig-release admin, right? I don't think that's an unreasonable change, right, especially if it protects the project.
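A gate like the one just described could, as one possible sketch, be expressed with a Prow-style OWNERS file in the protected directory. This is an illustration only; the `sig-release-admins` alias and the exact option names here are assumptions, not necessarily what the actual repo uses:

```yaml
# OWNERS (hypothetical sketch for a protected directory)
# Require approval from release admins before changes here can merge.
options:
  no_parent_owners: true   # do not inherit approvers from parent directories
approvers:
  - sig-release-admins     # assumed GitHub alias; the real alias name may differ
reviewers:
  - sig-release-admins
```

With `no_parent_owners` set, only the people (or alias members) listed in this file can `/approve` changes under this path, which is the "vetted by a sig-release admin" behavior described above.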
B: That makes sense, but I guess the problem statement from the testing side is: what would be a barrier that improves quality for Kubernetes while not stopping the Prow life cycle, because Prow, and this is the problem, goes beyond Kubernetes, and that is also a good barrier that still allows, say, SIG Testing to keep the ability to assure that anything that goes into Prow is properly...
C: It's "you're testing in prod," right? And that's good and bad, you know. We do get that quick signal, but we also fall into the mode of, you know, we have to activate the test-infra on-call, right, and that's a very small amount of people; we have to activate anyone who is working, who could be affected by the release, and that's multiple people on the release team, right? So it's not just like...
A: I think, with the freeze: there are benefits to testing in production. There's always going to be some aspect of what you do that is testing in production, unless you're on an extremely formal system; so the benefit then is recognizing that that happens and understanding how to plan and react to it appropriately. But if we're solely testing in production, that's kind of scary. And given that we had a notable failure in the last release cycle, it's extreme, maybe, for us to react by saying: oh, let's just do a test-infra freeze. But this has been a repeated pattern, and we have other things where we have a sort of phasing of the rigor that goes into things. We're trying to keep the project moving fast most of the time, but at other times we dial things back, because we recognize that there are a lot of uncontrolled variables, or minimally controlled variables, and that you always have that present aspect of escapes: your test cases are never production, so there will always be things we miss. Yeah.
C: I don't think it's unreasonable to ask for this, or to figure out how to walk through this. Like you said, we do have these gates in place, especially on the release side, to make sure that, whether it's doing a release without the nomock flag and stuff like that, we get to a point where we get the signal of what we're about to do before we do it. I think the problem that I probably have with this is when we hit late-stage release, and we're late in the release cycle, and we have to contend with this stuff, right? It would be unfair to not recognize that not everyone is a master of test-infra, and, you know, even the people who work on test-infra have not seen all the hidey-holes that are involved and the way the machine is put together, right? So when you have a failure, you know, when you have CI Signal or Bug Triage people opening up failure issues around just CI events, right, not realizing that there is an underlying component that is causing them, right, that burns a lot of time. I think with the Boskos failure, we lost, was it close to a week? It was definitely a few days, but it was close to a week in terms of vetting exactly what it was and then turning it around and rolling it back, right? Well...
A: Just a few minutes ago we did a brief run through Testgrid, and there was a lot of flakiness there. I think we all recognize that our CI has flakes, so it's almost become a standard practice: oh, I'll just retest, or, oh, I'll give it a day or two and see. So if we're claiming that we're really just trying to be focused on agility and speed, yet any time there's an issue we give it a day or two, more things go by, and that obscures what the underlying issue might have been, if there is one. Like, walking the board, there were some networking issues that seem to have cropped up on Friday or so. Granted, that's over the weekend; I need to go look and see whether that's been observed, is being triaged, what merged around that time, that type of thing. But we also have a lot of flakes, and things can go days before people decide that maybe that wasn't a flake, and all of this slows...
C: Yeah, definitely agree, you know, and I feel that, again, none of us are masters of test-infra. But as we move more and more of our processes into CNCF-sponsored infrastructure, right, we have to be incredibly cautious about cost. We have to be very, very concerned with that stuff, and the defaulting to... I think, you know, as a community, I think the default is to, like: let me retest this thing, right?
C: Let me retest this thing, especially because that's what the PRs tell you to do. They tell you: this thing failed, go look at it and then retest it, right? If we have built the, you know, collective thought that "this is just a flake, let me rerun it," like, that costs someone money, right? I think...
C: Yeah, you know, and I spend a lot of my time on lower-touch PRs. I spend less of my time in the kubernetes/kubernetes repo and more of my time on, like, kubernetes/release, or sig-release, or the enhancements repo, or Cluster API Azure, right, where the tests are going to be cheaper. But in kubernetes/kubernetes we're running scalability tests on these things, right, and I think it's something that we need to consider. We need to make sure that the overall...
C: ...you know, the global knowledge for the project is not necessarily to always retest, right, but to do some sort of vetting of what's happening behind the scenes, or to help people better read the tea leaves that are a job failure, right? Even if you are familiar with what could be happening, reading the logs can sometimes be hard.
C: Yeah, and that's not something we can necessarily expect a new contributor to do efficiently, right? If this is your first PR to Kubernetes and you have no idea why it's failing, right, it's giving you instructions to do something that costs money, right?
C: How can we make sure that the changes that are happening on the release side and on the test-infra side jump through more hoops to be vetted before rolling out to quote-unquote production, right? I mean, there are a few places that we need to attack. I'm just saying that I think figuring out how to build a system, or incorporate changes into the current system, around testing slower, or testing in different venues first, would be valuable as a conversation.
B: So, I mean, I completely agree: all the things that you all mentioned are absolutely important. Just to refocus the conversation on the original issue: the other thing that I feel, and there are really a couple of aspects to this, is that, for example, with the Boskos issue, possibly the worst-case one, that failure wasn't...
B: It was actually something that people from testing manually did, because they figured that there was going to be no issue once Boskos was updated. So that update did not follow the normal Prow upgrade cadence; they did it apart from a version bump. And the other thing is, they definitely do have some, so it's almost, they do...
B: They do have a lot of testing. I'd be lying to you if I just tried to list it all out, because there are a couple of things I don't remember; I don't work in there every single day. But, you know, they do a canary, and they have multiple other things, too, that they work with. Also, specifically related to Boskos: one of the other things is that Boskos development, at least, was trailing behind the rest of the Prow components.
B: One of the action items that they found for this release, or the near future, is to actually add alerts for, for example, 500s; that was an issue that we ran into during the last release. But overall, they did well: they work with canary deployments and a general alerting system maintained on that. The other thing, Stephen, since we're naming everyone that might be interested...
B: Since this is a SIG Release meeting, I can share with you the links in Slack where we had some of the other conversations about what things can be done, what things are not really feasible, and all that, and we can pick that conversation up again. And the very last thing that I want to mention is that, if anyone is interested in talking more about this, SIG Testing is going to have a meeting; they have their biweekly meeting tomorrow, 10:00 a.m. US Pacific.
C: Okay, yeah, yeah. This is definitely, like, we were not trying to boil the ocean or cast blame or anything like that. We just want to make sure that, like, we can... this is hard stuff, it's hard stuff. And, as you were mentioning, if people on this call are interested in contributing to test-infra, I think it's one of the many lesser-thanked, I'm not gonna say thankless, but lesser-thanked jobs.
C: It was, well, not ready yet; I have to issue an update to the charter. These were questions that we weren't able to answer before, when Aaron opened this issue, because the release engineering subproject did not exist, right, and we had not started on this massive adventure that we've had over the last few cycles, doing investigation into some of these processes. So I think that now we're actually in good shape to update the charter and...
A: ...an issue that will basically always be open. So I would also have to go look at the Go 1.13-specific one; I'm going to follow up on that afterwards, and I'll see if maybe on this one we can... well, there are two aspects to it. One, it's an umbrella: it will always be open, because there's always going to be some new Go version coming out. But there is process definition that needs to happen; there are some better decisions needed on how we move to new Go versions, and trying to not fall out of support relative to Go as well.
C: Yeah, I think once we have a process doc, this is something that we can... We also have, we have an undefined team of people who get involved in doing Go bumps. So it's like, it's Christoph and Tim, right? You know, you've got a bunch of people who jump in and kind of inherently know the things to do and the places to poke and what images need to be bumped where, and all that.
C: So I know there was some back-and-forth about what to do with a test kind. I don't think that we should specify test as a kind, right? I think there already is an area/test, right: area/test plus the area that it actually involves. So whether it's area/test plus the sig/release label, or area/test plus area/provider-azure, or something like that, right, that gives you an idea of where the test is focused.
A: Those may be things that are safer, but I also don't see this as necessarily requiring a kind label like that. These labels help us triage things, and, as has been said, we have so many of them that it can be overwhelming. And all it takes is noting, like, you've got a discussion flow in a PR, to say: hey, we're trying to merge this now, regardless of where "now" is in this context, regardless of what the context is, but spell it out, because this improves test signal. I think that when reviewers see that, they can have a discussion about the merits and risks and make a decision. But to think that there would be some blanket "oh, this is a test-only fix, we will merge it," or not, somehow; I don't see that being so cut in stone that the kind label actually yields much there. Yeah.
C: So kinds refer to the type of PR that something is, right, and it's fairly, quote-unquote, easy to try to categorize what a PR is: cleanup, documentation, bug, failing test, what have you, right? It gets trickier when, you know, in a perfect world we write perfect tests and we include those tests in one go when we make code changes, right. In that sense, what's great about the OWNERS files is that you can likely tag something, depending on where it is, with the area, right?
C: So areas, or SIG labels, or subproject labels, refer to specific pieces of code, right? So, you know, it gets harder to leverage. If we have a test folder, you can label that test folder area/test, right, as opposed to: you're not necessarily going to label it as kind/test, because maybe what you're doing is kind/cleanup or kind/documentation of that folder, right? So it gets a little trickier to categorize what you're doing depending on what you touch, right?
C: The person who opens the PR, the first person who triages it, whoever reviews the PR, needs to be able to categorize that stuff, and I think we all have different definitions of what that stuff could be. So, yeah, I'm not sure that kind is appropriate here, but... I want to see if I can maybe, come on, kind/dog emoji, no... I want to see if I can maybe take the screen for a bit.
C: This is totally not related to kind stuff, but I've been working on something and I kind of wanted to show it off. I probably should have put it on the agenda, but there are twelve minutes left and I want to show you all some stuff, in the spirit of: Stephen was hacking on a thing, let's have him walk through it quickly on the video. Oh, okay, was it a wave, or, okay, okay. It's a sweet live demo, and I'm going to see how quickly I can do some of this stuff.
C: All right, so I have been working on a few different PRs. Oh, and what, why is it queuing this and this, right? So, a few different PRs, building, yeah, all right. So how can I chain this together properly? All right. So: GCB builds, right. In test-infra, within the images/builder folder...
C: ...there is basically a definition: a Dockerfile, an entrypoint script, a cloud build file, and, did I say Dockerfile already, well, probably; and those define basically a tool that is sugar around the `gcloud builds submit` command, right. And what we've essentially done, what I did there, is kind of tweak a few of the things to allow us to arbitrarily build from random directories, right. So this tool, I'll show it to you.
C: I think I have enough time to show it to you. So, walking through the changes really quickly: added some new flags, a build directory flag, and updated the README, basically. So, if you've ever seen the variants.yaml file: basically, what it allows you to do is supply a set of different configurations for the way you might build an image, right, or, it's more accurate to say, the way you might submit a Google Cloud Build run, right. And the reason for that is I kind of tweaked this to be agnostic of images, because I realized that it's generally useful to us as SIG Release. So we have kind of three processes that we run through, one of which is not day-to-day affected by a human, which is the build stage, right.
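As a rough illustration of the variants.yaml idea described above (this is a sketch, not the exact schema the test-infra image builder uses; the variable names are made up), each named variant supplies a set of values that parameterize one Cloud Build submission:

```yaml
# variants.yaml (hypothetical sketch)
# Each top-level key under "variants" names one way to run the build;
# its key/value pairs become substitutions passed to the cloud build file.
variants:
  build-fast:
    BUILD_TARGET: quick-release   # assumed substitution name
  build-full:
    BUILD_TARGET: release
```

The builder tool would then be pointed at one variant per run, so the same cloud build file can serve several build configurations without duplication.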
C: This is what builds Kubernetes; it's kind of in the name. Essentially, this will build all the artifacts for Kubernetes, most of the artifacts, excuse me, the tarballs and all that good stuff, and land them in the bucket, and if there are images, we're pushing images to gcr.io/kubernetes-ci-images. And basically what it does is...
C: It uses the bootstrap image, which has been deprecated for a thousand years now, checks out the kubernetes repo and the release repo, uses those as the source, and then runs the scenario, right, the kubernetes_build scenario, right. What I wanted to do was, I created something called shadow builds. Shadow builds. So what I wanted to do is mirror that process and essentially see what it would look like for us to push to a new GCP project, right.
C: So that being the case: the staging release-test project, which is going to be the project that we're using moving forward, so this is the CNCF GCP project that release will be using, right. So what I wanted to see is if I could start teasing out some of the errors that would pop up, right, whether they be permission errors or misconfigurations of tests.
C: Things that we had made assumptions about that are not actually true, or no longer true, when you move from project to project; so that we could have kind of a shadow build in the background, seeding, basically seeding information for us as we're starting to test different things, like building debs and RPMs, right. You can't really do that unless the artifacts that you need are in the places you need them, to test that flow, right.
C: What ended up happening is that I'm starting to look at essentially rewriting the job in a way that does not require the bootstrap image, right, and what I found was that it kind of made sense to use what we're calling the image-builder image as just a generic GCB builder, right. So what's really cool about it is, with that variants file that I mentioned, you can start to...
C: ...you can start to configure something that looks close to the different variants that we have for building the project, whether it's build fast, or it's, like, make quick-release, so on and so forth, right, and kind of configure things as you see fit. But also, you know, what's great about it is it gives you the opportunity to not depend on kind of understanding everything that's happening in test-infra. Because if you look at this test, you only get a vague idea: here, scenario: kubernetes_build.
C: What does that mean? Where do I find that, right? And the answer is: there is a scenarios folder in test-infra that has a bunch of scenarios, right, and one of the scenarios is kubernetes_build. If you go into kubernetes_build, you see that it's eventually running main with some arguments, and this main does a few things in terms of getting environment variables and then eventually runs a make clean; and if you've specified the fast flag...
C: ...it will do a make quick-release; if not, it'll do a make release, and then it will run the push-build script, right, with the arguments that you've configured here, right. So this is not easy to understand. It's incredibly difficult; I remember the first time I figured it out; it's incredibly difficult to understand without some context, right. But if you take a look at this PR, adding support for Kubernetes builds via Google Cloud Build: this starts to make it a little clearer about what's happening, right.
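The branching just described can be sketched in shell. This is a paraphrase of what the kubernetes_build scenario effectively does, not its actual code; the function name and flag are made up for illustration:

```shell
# Sketch (hypothetical) of the kubernetes_build scenario's core logic.
# It prints the commands the scenario would effectively run.
kubernetes_build() {
  echo "make clean"
  if [ "$1" = "--fast" ]; then
    echo "make quick-release"   # fast path: skips cross-platform builds
  else
    echo "make release"         # full release build
  fi
  echo "push-build.sh"          # then publish the resulting artifacts
}
```

So `kubernetes_build --fast` corresponds to the quick-release path, and a plain invocation to the full release build followed by push-build.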
C: So this is a build file that lives within our repo, that we control under our OWNERS files and all that good stuff, right. It's using the git cloud builder; it's going to this directory; it's doing a clone of the kubernetes repo, at a branch that you can specify; then it's doing a clone of the release tools repo.
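A cloud build file along the lines just described might look roughly like this. This is a hedged sketch, not the actual file from the PR; the step images shown are standard Cloud Build builders, but the substitution name is an assumption:

```yaml
# cloudbuild.yaml (hypothetical sketch of the flow described above)
steps:
  # Clone kubernetes/kubernetes at a configurable branch.
  - name: gcr.io/cloud-builders/git
    args: ['clone', '--branch', '${_K8S_BRANCH}', 'https://github.com/kubernetes/kubernetes']
  # Clone the release tooling alongside it.
  - name: gcr.io/cloud-builders/git
    args: ['clone', 'https://github.com/kubernetes/release']
substitutions:
  _K8S_BRANCH: master   # assumed substitution name; overridden per run
```

Because the file lives in a repo governed by sig-release OWNERS files, changes to the build flow go through the group's own review rather than through test-infra.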
C: This is the registry I want to use; I'm going to allow duplicates and marker files; I'm going to use this as my extra publish file, so it will write a k8s-master.txt when the build is done; and I'm also going to build hyperkube, right. And then this is kind of my test doodad here, which I think should work. Maybe, I don't know; let's try it out, right. And then also a new Dockerfile that's based on the kube-cross image; this is a Go version thing, right, so we just bumped to 1.12.
C: So this is the bump to 1.12 here, and then it adds a few more things. The Dockerfile will get cleaned up, but it adds a few more things to essentially enable using gcloud within the image if we need to, as well as adding Docker, for images that need to do a docker build and then a docker push, right. So, seeing this in action, doot-doot, doot-doot, where's my little scratch pad, all right, right. So there is an image.
C: Right, so what this is doing is: I'm going to use this builder image that's already been built; the project is the kubernetes release-test project; the scratch bucket is the release-test GCB bucket; I'm not going to upload any source to GCB; and I'm using this as a variant. So this variant that I specified, again, is here, right. So this is how my job is going to be configured.
C: The reason this is important is because we have, as we see if I reveal the sidebar, we've got a release one as well as a staging one, right, and these are the ones that we've been using for, like, the GCB manager, and you can see it's kind of templated in a similar way. But with one minute left, let's kick that off and see what happens, right? You can see that the logs are available here and stuff is happening.
C: Google Cloud builds, not builds that necessarily... I'm going to end in an image push, right; so that PR, this PR, does that, and then use that to, yeah, use that too, yes, yes, then we're at time. So then use that to do the Kubernetes build process, right, and then wire that into test-infra, right. So you can view all these PRs; I will need reviews on this one super soon, so anyone who wants to do that can do that. Our demo is kicking off.