From YouTube: Kubernetes Release Engineering 20191028
A: This is the October 28th, 2019 Kubernetes SIG Release subproject, Release Engineering, meeting. This meeting is being recorded; it will be uploaded to YouTube after we complete. We ask everybody to adhere to the Kubernetes community code of conduct, please, which amounts to: be great people. We're here to collaborate, and we have to behave in a certain way for that to happen efficiently.
A: So today's agenda is in the normal place, which is linked off of the SIG listing information in the community repo, for anybody following along on the video; for people here, I've dropped it in the chat in Zoom. We don't have a lot of specific stuff on the agenda today but, as always, please put your name in the notes so we can see who is here.
A: Do we have any new folks today we might say hi to? It looks like we've got mostly all returning people. So that brings us into subproject updates, and usually this is where Stephen tells us what all he's hacked up; I see, like, 3 a.m. commits from him over the weekend. So, Stephen, is there anything that you would like to chat about today?
B: Sure. So I've been hacking on actually moving our builds, the build, stage, and release process, from the Google infrastructure over to CNCF infrastructure. The build process has been completed for a little bit, and people who are on the release engineering mailing list, or on the GitHub team, may have seen some of those PRs fly by. I'm trying to pull up the PRs on my phone right now, but I took some time to aggregate some of the stuff that I was working on in one place so that you can view it. I've also tagged the release engineering projects on a new PR that is a prototype of what it would look like to stage within the CNCF infrastructure.
B: Okay, so that's in the chat; if someone can drop that in the notes as well, I can walk through this issue really quickly. So, basically, the first part of it was the build process. What we wanted to see is if we could enable Kubernetes builds on the k8s infrastructure and do it via Prow. So, the same way the ci-kubernetes-build and ci-kubernetes-build-fast jobs run, see if we can wire that up to run as Google Cloud Build runs instead, and then plug those into Prow.
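As a rough sketch of the wiring being described: a Prow job can shell out to `gcloud builds submit` to kick off a Google Cloud Build run. The project name, config path, and substitution key below are illustrative assumptions, not the actual job definition:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gcbSubmitArgs builds the argument list for a `gcloud builds submit`
// invocation. The project, config path, and substitution names passed in
// main are placeholders for illustration only.
func gcbSubmitArgs(project, config string, subs map[string]string) []string {
	args := []string{"builds", "submit", "--project", project, "--config", config}
	for k, v := range subs {
		args = append(args, "--substitutions", fmt.Sprintf("%s=%s", k, v))
	}
	return args
}

func main() {
	args := gcbSubmitArgs("k8s-staging-release-test", "gcb/build/cloudbuild.yaml",
		map[string]string{"_BRANCH": "master"})
	// Print the command a job wrapper would run, rather than running it here.
	fmt.Println(exec.Command("gcloud", args...).String())
}
```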
B: You'll see some of that checklist is actually turning the image builder into a generic one: the test-infra image builder, which I did a walkthrough of (not on last week's call, the week before that), wiring that up to be more of a generic GCB builder. Once that was done, I also added a bunch of prototype jobs, which you can view on the sig-release prototype-blocking board. That board includes two jobs: one is the build master job, and then the build fast job. From there, I've been working on the staging process. The first part of that, you may have seen the alerts pop up in the channel about me manually pushing commits into branches instead of PRs; that was the creation of a feature branch called prototype.
B: The prototype feature branch is essentially created so that we can do all of this work and then cut everything over cleanly. The reason behind that is, essentially, that our staging process depends on some GCB cloud build files that are located in the release repo: it's kubernetes/release, then gcb, and then there are separate folders for build, stage, and release, and each of those has a cloudbuild file within it.
B: If we were to change those today while the job was essentially half done, we would break the actual staging jobs. Well, not the actual staging jobs, but when a release manager actually runs a staging run, it would fail, because it would try to land assets in a combination of the old infrastructure and the new. So the feature branch is there to essentially make sure that we can test out the new flow and do it cleanly without breaking the current flow for release managers.
A: I have a question there. At least a portion of that current flow is broken, the Google-internal portion. Do you think we should roll back, like do some revert PRs, so that instead of them building from it... well, I like the idea of them building from a tag, first of all, but is head actually in a functional state?
B
No
I
would
prefer
to
roll
forward
so
there
they're
working
on
the
old
tag,
because,
because
essentially,
what
we've
done
is
move
some
of
the
locations
around
for
the
Deb
and
rpm
builds.
What
I
want
to
do
is
unify
the
tool
that
is,
that
is
building
both
the
Deb's
and
rpms
and
then
roll
their
tool
internally
forward.
You
know
if
I
may
expected
inputs
outputs.
We
saw
if
the
schedule
a
call
for
that,
but
I
don't
want
to
do
that
until
I
get
closer
to
unifying
the
two,
so
the
idea
would
be
so.
B: It looks at the dependencies for each of the packages that we want to build, and then it does its thing; and then on the RPM side we do something very similar, except it's a super tiny bash script. So I would like to make sure those two things do the same stuff before we ask them to refactor any of the internal stuff. I don't think there's much value in us reverting the stuff at this moment, but someone feel free to chime in if they feel differently.
B: Okay, all right. So, some more of the weekend work. On the staging side, I kind of broke stage and release up into three phases; there are probably more, and there are probably sub-phases within these phases. It's just kind of a brain dump of the ideas that I had as I was working through some of these PRs. Phase 0 is essentially the staging-only phase.
B: Let me just pop that into the chat as well. So that's that PR there, and essentially what I've done in that PR is tweak a few things so that we can configure buckets, or so that bucket names are consistent. It was hard-coded in a bunch of places, so some of the buckets and some of the gcr.io paths that were hard-coded are now more consistent. It also moved some things around: there was logic that makes assumptions based on whether or not you're a Googler, and I removed all of that logic. I also temporarily disabled any checks that look for a GitHub token. Each of the commits within that PR that I want to drop is prepended with a tag that says "drop".
B
So
each
of
those
are
essentially
commenting
out
logic
that
would
check
for
check
for
the
github
token
now
the
reason
behind
that
is
the
github
token
that
we
use
is
stored
essentially
in
kms
right
so
in
the
kubernetes
release
test
KMS
GCP
KMS
api
freight,
so
we
can't
use
the
credentials
that
we
store
in
the
kubernetes
release
test
KMS
on
the
new
tates
staging
release,
test
GCP
project
right,
and
we
also
don't
yet
have
access
to
the
kms
api
on
the
new
projects.
So
there
is
yet
another
PR
that
I
will
pop
into
the
chat.
B: So there's another PR for the k8s.io repo, and essentially the first thing that PR does is add a kubernetes-release-managers-admins group to our kubernetes.io G Suite. That group is meant to have elevated privileges on SIG Release GCP projects, and the members of that group will be the SIG chairs. The elevated privileges that we're referring to are things like being able to create GCS buckets, being able to admin KMS, and, I think, being able to push to GCR.
B: We're also configuring those projects; the PR also configures the projects so that people in the release-managers private group can access GCR as well as execute cloud builds. And we're also configuring the release-managers@kubernetes.io group, which is essentially the aggregate of the patch release teams.
B: Was it the release-test-prod project, or the test-prod project? Something like that. So once that PR merges, I'll be able to configure KMS secrets for the GitHub token and then reconfigure the cloud builds to store that token; or rather, to store the same token, but as a new secret for the new project.
B: There are places that don't necessarily have the same vanity URLs that we've come to expect. One of them is dl.k8s.io, which is a vanity URL for getting downloads of Kubernetes, or for understanding what the current versions are; so, like, dl.k8s.io slash latest.txt or something will give you the latest build of Kubernetes. And because dl.k8s.io doesn't point to the new test prod yet, it doesn't make sense to try to cut any of that stuff over.
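For illustration, the version-marker lookup being described might look like the following in Go. The `release/stable.txt` path follows the public dl.k8s.io layout as best I can tell; treat the exact marker names as assumptions:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// markerURL builds the URL for a version-marker file on the download host.
// The host and path layout here follow the public dl.k8s.io convention as
// I understand it; treat them as an assumption, not a spec.
func markerURL(marker string) string {
	return "https://dl.k8s.io/" + strings.TrimPrefix(marker, "/")
}

// latestVersion fetches a marker such as "release/stable.txt" and returns
// the version string it contains.
func latestVersion(marker string) (string, error) {
	resp, err := http.Get(markerURL(marker))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Network access may not be available; only print on success.
	if v, err := latestVersion("release/stable.txt"); err == nil {
		fmt.Println("current stable:", v)
	}
}
```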
B: There's also, somewhere in this, the Prow staging jobs I'm going to be configuring. What's kind of cool about this is we'll get to the point where these staging jobs will be running consistently, in nomock mode, and they will be staging releases constantly; I'm thinking we run them every four hours.
B: There are parts of this that I don't know all the shapes of, which include migrating the old images, and not just the old release images but all of the images that exist on k8s.gcr.io, into the new prod. And then there's also the fact that we need to figure out what image promotion looks like for staging.
B
The
Onaga
will
build
the
docker
images
or
the
container
images
and
and
then
push
that
to
the
prod
registry.
What
we
would
want
instead
is
to
push
them
to
the
staging
registry
right
and
then
run
a
script
that
maybe
maybe
either
outputs
the
list
of
Shaw's
for
for
those
for
those
images,
and
then
we
go
to
we
follow
the
image
promotion
process,
or
maybe
we
have
a
script
that
actually
will
create
the
pr
to
do
the
image
promotion
right.
So
we
have
to
figure
out
how
we
want
to
approach
that
as
well.
A: So I've got a question on that prod discussion. There are a number of things that are TBD for the future, but the kind of TL;DR there felt like: how do we make sure that the new prod is the same as the old prod? And I wonder how much that's necessary. As a demonstration: could we leave old prod there, and instead, if we define all the things that are going to be in new prod, ensure that they're there?
A: Maybe it's not everything that's in old prod, depending on reasons or what we see, but we're talking about potentially making a break here intentionally, and I think that could be okay; there's a lot of random old stuff out there that we don't fully understand. But the one thing that I think we would need to do is: anything that we're pushing to new prod, that final output would also potentially need to be copied to old prod. So old prod, in its weird, kind of broken state for certain criteria, could stay that way.
B: Yeah, so I don't know how they're going to be handling container images just yet. It's less so the flow itself, and more so the fact that we know that people use these vanity URLs. So we have to make sure that if we cut those over, like if we use a staging k8s.gcr.io type thing as a pre-URL, or, I don't know, beta.gcr.io or something, some URL that lets you know that it is clearly not the current version, that might be okay. We also have to make sure that, as we're going about doing some of this stuff, we're pretty intently documenting how the prod and test projects function. Maybe because the projects we're using right now, like k8s-staging-release-test: it's almost in the name that it's like a test of staging.
B
It's
not
quite
staging
I,
don't
know
if
the
intention
was
for
this
to
be
a
test
of
staging
or
for
it
to
be
absolutely
because
if
it's,
if
it's
the
former,
then
this
this
could
get
interesting
and
then
there's
also
Kate's
release
test,
prod
or
prod
test.
Right
and
and
again
the
name
implies
that
it's
clearly
not
the
real
prod
right.
It's
just
a
playground
for
us
to
figure
out
what
the
workflow
is.
B: So we have to make sure of that while we're doing this. Essentially, the way I've configured that prototype right now points everything at staging. It assumes you have access, because you should have access to staging; that's the group that we created. So it's landing what are, quote-unquote, prod artifacts in the staging bucket, which is fine for now. But once that PR merges, I'll have more access to actually land things within the prod project. So I'm not actually sure if the...
A: On those files: all of those published artifacts are API; people depend on them. So we'd either have to prove that we have all of these things, including these things that we don't want to have, or we make some sort of a split where you've got the old, deprecated way for the next year or something, in addition to the new, preferred way.
B: We added essentially some conditional logic that will check the version: I believe it'll keep publishing the MD5 and SHA-1 checksums until the cutoff version I set. So I'm not too concerned about individual artifacts.
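That conditional might look something like this. The cutoff minor version used here is purely illustrative, since the recording doesn't make the exact release clear:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// publishLegacyChecksums reports whether MD5/SHA-1 checksum files should
// still be published for the given release version. The cutoff minor
// version (1.17) is an illustrative assumption, not the real cutoff.
func publishLegacyChecksums(version string) bool {
	const cutoffMinor = 17
	v := strings.TrimPrefix(version, "v")
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return true // unparseable: keep publishing to be safe
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return true
	}
	return major == 1 && minor < cutoffMinor
}

func main() {
	for _, v := range []string{"v1.16.3", "v1.17.0-alpha.3"} {
		fmt.Printf("%s publish md5/sha1: %v\n", v, publishLegacyChecksums(v))
	}
}
```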
B: What I'm concerned about is, like, the presentation of the entire blob. For people who have scripts that are configured to go look for things in dl.k8s.io, if our new prod is not dl.k8s.io, then they'll have problems. Same if people have scripts configured to look for container images in k8s.gcr.io, which I actually ran into, because, basically, if I'm doing the nomock flow and it's pointing at the...
B: Yeah, I would love to not have to maintain these things for a year, to be honest. The part that concerns me is that cut, when we actually do the cutover: what does it even look like? Part of it is going to be changing DNS records, and that's just the name of the game. But I don't know, because there are also certain parts of this flow that I can't test...
B: ...until those records are changed. Which, I guess, we can mock out the records; yeah, we can mock out the records. This is not impossible, I don't think; we just have to think through a few more pieces. I know that dl.k8s.io and k8s.gcr.io are going to be things that we have to solve. I don't think they're problems.
A: We've got another question, back on the newly-created feature branch and the warnings in the Slack channel about the branch fast-forwards. Is there a way that we could... I guess, first, we should talk about how that branch is going to be managed. And another kind of meta issue I'd like to talk about is how others of us can get involved in the hacking that you're doing. One of the PRs you showed has 12 commits, and you're marching forward, but is there work here that others could contribute?
B: If you are a release manager associate, so you're kind of shadowing, learning all this stuff, and starting to knock down PRs and stuff like that: I created a new role on the kubernetes-release-test project that will give you access to view the cloud builds and some of the GCS stuff. So that means the previous issue that we had, of people wanting to shadow this role and not having the requisite access to at least even look at logs, should be gone.
B: The branch management and patch release team folks have access to do the things that I'm doing right now in the k8s-staging-release-test project, and we also make sure that we allow the release manager associates to view the stuff that I'm doing. And then the additional admins role would give the release chairs access to edit the KMS stuff, to add secrets.
B: So this will come up later, when we actually add a GPG key so that we can sign the debs and rpms before pushing them somewhere. We need access to do that before we can do some of that stuff, to lay the groundwork. Once that admins role and the k8s.io PR are merged, and once that phase 0 lands, you'll be able to work on the prototype branch. There's probably a lot of stuff that is unexplained, which I can walk through with people if they're interested in contributing to that, and there are also probably things that, if you're not a release manager, you won't have access to. Just a heads up for the people on the call that might also be interested in working on this: this is primarily going to be work that's done by release manager associates and the rest of the release managers, just to be aware of that ahead of time.
B: Okay, all right. So, my next topic. We have been discussing this for a while, and it's just kind of been in the background, like, you know, one day it would be really, really nice to do this, but we'll see when we get there: refactoring all the bash tools that we have in kubernetes/release and turning them into Go tools. So Daniel and Sasha have graciously volunteered as tribute to start work on that.
B: So Sasha, I think last week, put up a PR for a skeleton of the push-build tool. I also have a kind of test PR for the branch fast-forward tool. I was going to write it up in an issue, but I guess I'll say it now. My vision for this (Tim and I were talking about this last week) is something of a toolbox: a singular tool that does a bunch of these functions, so subcommands on the same tool.
B: So something like kube-release: kube-release ff would do the fast-forwarding, and kube-release push-build, or kube-release push, or something like that, would do the push-build functionality. The reason I'm thinking of aggregating these into a single command is something you can see in the repos; and I speak as someone who is not, like, a mega Go hacker or anything like that.
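The toolbox shape being sketched, one root command dispatching to ff and push subcommands, could start as simply as this. A real implementation would likely use a CLI library such as cobra; the command names here just follow the examples above:

```go
package main

import (
	"fmt"
	"os"
)

// dispatch routes a subcommand name to the matching tool, mirroring the
// "single toolbox with subcommands" idea (kube-release ff, kube-release push).
func dispatch(args []string) (string, error) {
	if len(args) < 1 {
		return "", fmt.Errorf("usage: kube-release <ff|push>")
	}
	switch args[0] {
	case "ff":
		return "fast-forwarding release branch", nil
	case "push":
		return "pushing build artifacts", nil
	default:
		return "", fmt.Errorf("unknown subcommand %q", args[0])
	}
}

func main() {
	msg, err := dispatch(os.Args[1:])
	if err != nil {
		fmt.Println(err) // demo: report usage instead of exiting non-zero
		return
	}
	fmt.Println(msg)
}
```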
B: I just kind of bang my head on this stuff until it works. So you can see within the release repo, as well as across a bunch of different repos, that everyone has kind of a different style: how they organize functions, how they organize libraries, how they decide, you know, even down to choosing a logger. And you can see that multiple of those decisions are reflected within the kubernetes/release repo.
B
What
I
want
to
make
sure
of
is
that
we
don't
we
we
do
our
best
to
not
duplicate
work
right,
so
where
we
can
use
the
same
thing.
We
do
right
and
if
we
have
exactly
exactly
Georgia
style
guide,
right,
I
don't
have
enough
style
to
write
a
style
guide,
but
I
think
we
can
probably
beg,
borrow
and
steal
from
a
few
other
places.
B
But
you
know
the
idea
would
be
to
have
something
similar
to
you
know
if
you
think
about
the
way
that
keeps
ETL
is
designed
right,
it's
essentially
or
if
you
think,
of
clustered
or
CGL
right,
they're,
essentially
designed
where
there
is
a
top
level.
You
know,
there's
a
route
and
then
they're
a
set
of
sub
commands
within
that
right,
whether
it's
apply
or
run.
Or
you
know
our
editor,
you
know
so
on
and
so
forth
doing
the
same
with
the
release
tools
right.
So
that
way,
we
we
wire
up
a
single
logger.
B
We
use
similar
libraries,
the
branch
fast
forward,
PR
that
I
have
open.
You
can
start
to
see
that
we
break
out.
We
break
out
into
a
package
util
that
has
get
and
then
common
and
then
the
idea
is
there
are
to
replicate,
replicate
the
parts
that
we
need
from
the
tool.
The
the
libraries
that
are
in
bash
that
are
already
in
Lib,
so
Lib
common
SH
Lib
get
Lib
des
H
and
live
release,
live
SH
write.
Those
libraries
are
imported
into
all
of
the
Bosch
tools
that
we
used
stage
release
push.
B
So
it's
important
that
we
it's
important
that
we
consider
that
design
as
we
move
into
the
go
stuff
to
write.
The
part
of
what
I
want
to
do
is
make
sure
that
we
don't
try
to
replicate
all
of
it.
I
want
to
let
the
tools
that
we
are
building,
define
the
functions
and
methods
that
we
need
within
those
libraries
on
the
go
side
right.
B: You can also see that tools that we don't necessarily use in the primary release process, the build, stage, and release bits, also have need for similar functions. Like the release notes tool: the release notes tool has a git library within it, and some of the functions that are defined there are the same as the ones that we need for the rewrite of the bash tools. So something that I plan to do within that fast-forward PR is essentially just unify those two. So again, we're going to build the toolbox; I think that's the right move, so that we're not contending with conflicting styles.
B: Cool. George, if you don't mind, would you open a ticket (or, Jira flashbacks, an issue) in kubernetes/release regarding a style guide, just so we don't lose track of that? I think that's a great idea. Now, Sasha made the suggestion that I should just land my PR so that people can collaborate on it, and I agree. So I'm going to be wrapping that up next week, sorry, this week, and pushing that up.
B: The one that I think is probably the most important is push-build. push-build is the tool that is responsible for, as the name implies, pushing builds of Kubernetes up to the staging buckets. That tool is leveraged heavily in the ci-kubernetes-build and ci-kubernetes-build-fast jobs, as well as in a bunch of other places that maybe we can't track down; maybe it's used by external tools as well. So I think it's probably the next most important one to take down.
B: ...the libraries that I was mentioning before. So getting it to the point where it's a Go tool that can be imported would be pretty cool, because then people are no longer dependent on, well, they're still dependent on our repo, but they're no longer dependent on libraries that are written in bash. So I think it would be a step forward. From there, I think the next ones we'll probably have to look at, or start to think about, are anago and its GCB manager, gcbmgr.
B
So
for
people
who
are
not
aware
of
an
ongoing
GCP
manager,
a
nagato
is
our
release
tool.
It's
our
granddaddy
release
tool
that
that
is
1800
lines,
1868
lines
last
I,
checked
of
of
batch
and
then
chiefly
major,
actually,
a
wrapper
for,
like
Google
cloud,
build
substitutions
on
top
of
a
Naga
write.
What
I
see
happening
now
that
I
have
taken
a
look
at
the
of
the
image
builder
is
that
we
try
to
chip
away
at
we
try
to
chip
away
at
the
pieces
of
GCB
manager
and
replace
them
with
that
image.
B
Builder
right
I
also
had
a
chat
with
Katherine
about
the
image
filter
or
what
I'm
gonna
I'm
gonna
refer
to
now,
as
GC
builder.
At
this
point,
because
I,
you
know,
I
realized
that
the
way
we've
tweaked
it
it's
generally
useful,
now
I
think
I
think
what
we're
gonna
end
up
doing
is
we'll
we'll
have
a
co
maintained
and
GC
builder
from
between
cig
release
and
thing
testing,
we'll
pull
that
out
into
its
own
repo
right.
So
this
will
be
a
tool
that
you
can
use
to.
Essentially,
essentially
it's
just
sugar.
B: It's also not in a place that we like, currently; it would be nice to have the tool in one place where you understand the set of dependencies. People won't have to depend on everything that lives in test-infra, because test-infra is a lot, and you kind of require the top-level go.mod of test-infra for it to build that tool. So: moving away from that, having a cleaner set of dependencies on that tool, and then working through our test jobs, because this has the potential to replace...
B
If
you're
familiar
with
the
scenarios
in
sintra,
it
has
the
huge
potential
to
replace
those
scenarios
right.
So
I've
already
proven
that
we
can
replace
the
kubernetes
build
scenario
with
that
builder
and
then
the
next
one's
we
probably
look
at
doing
like
the
kubernetes
ete
and
the
kubernetes
and
the
execute
scenarios.
B
So
if
anyone
has
no
idea
what
I'm
talking
about
the
scenarios
and
testing
for
it,
so
kubernetes
slash
tests,
infra,
slash
scenarios,
yeah
and
then
there's
a
set
of
scenarios
that
are
Python
scripts,
that
that
basically
are
the
underlying
tools
for
a
lot
of
our
build
jobs
right.
So
the
scenarios
are
built
into
the
bootstrap
image
and
if
you're
familiar
with
Reshef
image,
the
bootstrap
image
has
been
deprecated
for
some
time,
but
we
still
heavily
depend
on
that
image.
It's
also
the
image.
B
So
there's
like
a
there's,
a
there's
quite
a
bit
of
there's
quite
a
bit
of
mess
underneath
some
of
these
things
and
if
we
use
this
tool
and
we
clean
this
tool
up
and
we've
got
a
bunch
of
people
working
on
it
and
I
think
we've
got
the
opportunity
to
finally
deprecated
things
like
bootstrap
things
like
cubic
ins,
E
and
what's
kind
of
cool
about
this-
is
that
we
get
away
from
running
jobs
within
prowl,
not
not
strictly
correct.
Freight.
B
We
get
away
from
running
executing
proud
jobs
that
run
on
a
proud
cluster
right,
and
then
we
moved
to
a
point
where
we're
executing
proud
jobs
that
execute
Google
cloud
build
runs
then
run
on
ephemeral,
infrastructure
right
and
that's
the
place
that
we
want
to
be
right.
We're
still
getting
to
read
out
the
logs,
but
we
have
a
consistent
framework
for
submitting
these
jobs.
B
We
have
less
of
a
dependency
on
people,
building
images
and
Murphy
dependency
and
more
of
an
allowance
to
essentially
define
just
specify
an
image
that
is
already
built
right,
not
depend
on
any
of
the
testing
for
an
infrastructure
outside
of
submitting
the
job
right
and
I.
Think
that's
a
pretty
cool
place
to
be
at
so
I'm
just
I'm
throwing
stuff
at
people
now.
But
if
you
have
questions
from
you.
A: I'm going to throw in a couple of links, I guess, for Daniel and Sasha; I don't know if you've seen these things. I'll put them in the minutes, but Bart Nicola, who is doing work on k8s-infra, was also looking at some of this common bash code and was trying to figure out what it does. Initially he thought he was going to kind of help fix some issues or do some cleaning up and refactoring, and it's such a mess that it's hard to even understand what's there. So he was looking at documenting...
A: ...what's there, and I think, to some extent, maybe we don't want to just take something that's there as a bash function and make it a Go function. I think a theme today is, like, split, cut over: there's a lot of things we want to just leave behind. But at the same time, when you look at that bash code...
A: ...I think you'll probably come away with the sense that there is actually some intelligent thought that went into it, and we don't want to just completely disregard that blindly, fall into traps, and eventually, in a year, discover: oh, we have Go code that exactly matches the old bash code, and we're unhappy. So we're going to be able to learn from it, but how to do that is complicated.
A: So he created a set of docs describing what he sees there and the interdependencies of things, and that also will inform us a bit as we try to decouple things. It should help give a reminder of other things that are using it, and maybe point at some patterns that we could replace, maybe with something, for a better pattern; I don't know. So there are like six or seven links all pasted into the notes. Yes.
B: I mean, yes. So the big part about this stuff is, again, the libraries: there are a lot of things that we do in those libraries that are consistent across each of the tools that we use. So if you take the common library, stuff that we do is, like, we do a log init, right? And we don't need to log-init anymore if we're using a logger in Go, right? Yeah.
B
So
that's
like
a
function
that
goes
out
of
the
window
right
and
then
you've
got
things
like
clean
exit.
Okay,
clean
exit,
basically
wraps
wraps
exit
codes,
make
sure
that
things
are
provides.
A
provides.
A
reasonable
exit
code
provides
some
sort
of
error
message
and
you
know,
and
that's
stuff
that
they,
you
know,
is
handled
by
the
AOS.
You
know
the
OS
import
right
on
the
go
side
right,
you
know
and
then
you've
got
you
take
the
get
Lib
and
get
lib
like
people
have
written,
get
libraries
already
and
go
right.
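In other words, the common.sh pieces largely dissolve into standard Go. A minimal sketch of what replaces a log init and a clean_exit, under the assumption those helpers just set up logging and normalized exit codes:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// run holds the tool's real work and returns an error instead of calling
// exit deep inside helpers, which is roughly what clean_exit was papering
// over in the bash libraries.
func run(args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("no arguments given")
	}
	log.Printf("processing %d argument(s)", len(args)) // logger needs no init step
	return nil
}

func main() {
	if err := run([]string{"demo"}); err != nil {
		log.Println(err)
		os.Exit(1) // single, reasonable exit code at the top level
	}
}
```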
B: ...you know, leveraging the gcloud CLI tool or the gsutil CLI tool. Those are the ones that do functions within Google Cloud Platform, whether it's copying things from one bucket to another, or from your local machine to a bucket, or from a bucket to local; or, you know, authenticating: using your Google Cloud credentials to authenticate against the GCR registries, or making sure that your Docker client within the GCB run can leverage your credentials. Things like that.
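A hedged sketch of what a thin gsutil wrapper could look like on the Go side; the bucket and path values in main are placeholders, not real locations:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gsutilCopyArgs builds the argument list for a parallel, recursive
// `gsutil cp` between two locations (local path or gs:// URL, either
// direction).
func gsutilCopyArgs(src, dst string) []string {
	return []string{"-m", "cp", "-r", src, dst}
}

func main() {
	args := gsutilCopyArgs("_output/release-tars", "gs://example-staging-bucket/v1.17.0-alpha.3")
	// Print the command a library wrapper would run, rather than running it here.
	fmt.Println(exec.Command("gsutil", args...).String())
}
```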
B: How are we staging artifacts? That should, again, inform the way we're building these libraries. It shouldn't be, like Tim said, you know, we really do not want to get into a place where we're like: hey, lib/common.sh, okay, copy-paste, comment it out, all right, what is the Go equivalent of this structure? We want to make sure that we're not doing that kind of stuff. This is not a copy-pasting exercise.
D: I just have a comment on that. I mean, I'm fully in on rewriting those things and making them as small as possible, and I think, if we manage to do that, we should also aim to use as much as we can from the tools we have, for example from GCB. I could imagine, for example, a tool that does nothing else than, as you said, pushing something to GCS or pulling something down, and then we can even create reusable GCB tasks, or whatever it's called in that context.
B: If you look at the release build cloudbuild.yaml, that is the build config that we submit to GCB via the Prow job. And again, if you scroll up in the notes, all of that stuff is linked, how I connected each of those pieces; it's linked because I didn't know how to do it before. So again, a lot of this stuff is me just bashing my head on something until it works, but yeah. What's really cool about that...
C: Yeah, it sounds like a good plan. I definitely think that translating the bash over to Go directly would not be a long-term solution, and potentially not even a good short-term solution. So it sounds like there might be some design work that needs to kind of preempt any of the actual code work that happens.
A: To some extent, we may end up doing that as a little more of an agile, iterative thing; some of this stuff is complicated enough just to understand what's there today. So, yes: define a small thing we need, get consensus that it would be useful, get going on that. I don't see us being able to do the whole global picture, "here are all the tools we believe we need", up front. But yeah, definitely we want to see design work happening, and documentation, test cases, all of those things beyond just landings of Go code in PRs. Yes.
B: Again, they're scary tools: they have the potential to break a lot of things, and we don't understand how to use them. Some of them are documented, some of them are not, and some of the documentation does not match what they currently do, so there are lots of problems there. So if you're working on something new, or you're working on something old, and you found a thing that you all of a sudden understand that you didn't before: write it down, file a PR. Great.
B: That will go a long way for the next person that's coming after you. For the release managers on the call: I owe you an onboarding doc. I'm going to include the image builder code base, as well as the recording of the 1.17 alpha 3 cycle, both the code base walkthrough and the staging part of the cycle. And I'm thinking that, maybe, if people come up with topics, we can spend the last half of all of the release meetings on them.
B: We also have the beta.0 coming up tomorrow. The beta.0 is an interesting release because it also involves the branch cut, and it also involves the creation of the test jobs. So I'm thinking that I'll do another recording, but this time the recording will start from the release part instead of the stage part, so we'll get more of the flow, and then we can walk through it.