From YouTube: K8s Release Engineering Subproject 2019-09-30
A: Hello, hello, happy Monday everyone. This is the September 30th edition of the SIG Release release engineering subproject meeting. This is a meeting that is recorded and available on the internet, so please be mindful of what you say, please be sure to adhere to the Kubernetes code of conduct, and in general just be a really great person. So we've got one agenda item.
B: But there were a lot of question marks, however, and some of them we have to ask SIG Release. For instance, my first question: kubectl is already trailblazing this. Well, they're using staging, but they already have plans to move out and use different versioning. Did you have a discussion with the SIG CLI folks in terms of how we are going to release kubectl?
A: The state of the game right now is essentially that they all get built together. The way our packaging tools work, it almost requires you to build everything at the same time; everything is kind of chained together. So the second agenda item, actually, is a PR from me that separates that. If you look at kubernetes/release pull 884, it's a PR that actually starts to separate that, at least on the deb side.
A: An additional problem there is that, essentially, if the spec file changes right now, it changes across all versions. We don't have any sense of versioning there. So if I build debs or RPMs for Kubernetes 1.13 today, that definition could be wildly different from the way we built 1.13.0, right?
A: So the idea behind this PR was essentially to allow one to build arbitrary versions. So if I say to the deb builder right now to build the packages kubectl and kubeadm and cri-tools, with the channels stable and testing (our channels, what are we calling them, release and testing), I'll get a set of packages for the release and testing channels across kubectl, kubeadm and cri-tools, right?
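The cross product described here (every requested package built for every requested channel and version) can be sketched roughly as follows. The package and channel names come from the meeting, but the function itself is a hypothetical illustration of the idea, not the actual deb builder:

```python
from itertools import product

def build_matrix(packages, channels, versions):
    """Return every (package, channel, version) combination to build,
    mirroring how asking the builder for N packages across M channels
    yields N*M sets of packages per version."""
    return [
        {"package": p, "channel": c, "version": v}
        for p, c, v in product(packages, channels, versions)
    ]

jobs = build_matrix(
    packages=["kubectl", "kubeadm", "cri-tools"],
    channels=["release", "testing"],
    versions=["1.16.0"],
)
# 3 packages x 2 channels x 1 version = 6 build jobs
```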
A: So the idea there was to be able to do exactly something like this: if someone wanted to build arbitrary packages, for arbitrary versions, for arbitrary channels, they'd be able to. I wanted to prove out that it could be done on the deb side, which is the harder side to do; the RPMs should be fairly easy. Tim actually had a PR open for this.
A: It looks simple enough to do, but essentially what we want is to be able to hermetically seal some of the definitions across different versions, so that when we do a new release and we change something, like the version of a dependency, we can still lock it in per version. So, one of the issues that we've had in the past, and Lubomir, I think you're aware of this, or Fabrizio...
A: You might have also been on this issue, where I think when Kubernetes 1.15 or 1.14 came out there was a dependency change that caused breakage around kubernetes-cni, because it was locked to kubernetes-cni 0.6.0. Our dependencies were "equal to 0.6.0", so new versions of kubernetes-cni that were in the stream, which included 0.7.5 when it came out, caused breakage for anyone trying to install kubeadm.
A: Right, so there is an infinite amount of combinations that have been untested by the community, which makes it, I was going to say dangerous, but inadvisable, to publish new packages of certain dependencies. So what we need to do, and Tim and I have been talking about this, and we have a doc that is around, and this is what I was referring to that needs to get migrated into KEPs, is here.
A: So this is the release engineering brainstorm, and this is basically the work product of, I think, Tim and I sitting on the phone for like six hours over a few days, as well as the conversations that we've had with multiple people. I don't think it was meant to be six hours; we just got to chatting and enjoyed ourselves. But yeah, this is a lot of that.
A: Essentially, we assume that all of our builds are based on the version marker files. For people who are not familiar with the version marker files, essentially this is what we do after every build. If you're curious about looking into this stuff, it is the ci-kubernetes-build job in Testgrid, and that maps to the push-build script within k/release, where essentially the last mile of that script is to push a version marker for the repo. So essentially we could do it.
A: We'd have to build something, or we'd have to fit it into the structure that we have today. We could talk about ways to make that better; I think today, right now, it's not bad. Essentially, if you ask for a version of Kubernetes, if it recognizes the version of Kubernetes it will resolve it, and, let me go back, it'll give you a 404 if it doesn't recognize the version of Kubernetes.
A: Essentially what it's doing is going to download that version marker file from k8s.io, under /release/, right. So some of the files that you might see are latest.txt, or the stable.txt. latest.txt will give you the latest CI build, and it could either be an amd64 build or a cross build, right?
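The version marker lookup works roughly like this. The URL below points at the public release endpoint discussed in the meeting, but the helper itself is only a sketch of what a consumer of the markers does, not the actual push-build script:

```python
import urllib.request

# Public endpoint serving the version marker files (latest.txt, stable.txt, ...).
MARKER_BASE = "https://dl.k8s.io/release"

def marker_url(marker: str) -> str:
    """Build the URL for a version marker file such as latest.txt or stable.txt."""
    return f"{MARKER_BASE}/{marker}"

def resolve(marker: str) -> str:
    """Download a marker file and return the version string it points at.
    An unknown marker yields an HTTP 404, matching the behaviour
    described in the meeting."""
    with urllib.request.urlopen(marker_url(marker)) as resp:
        return resp.read().decode().strip()
```

For example, `resolve("stable.txt")` would return something like a `v1.x.y` version string, while a marker the service does not recognize raises an HTTP 404 error.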
A: Essentially, a lot of the script assumes that that version gets passed through. That version is used in kubeadm, kubectl and the kubelet, and then the kubernetes-cni and cri-tools packages think about that version too; they may use it as well. So right now I have something in that PR to essentially hard-code, not hard-code, well yeah, hard-code, the dependency.
A: So, for the kubelet, we have a discussion about removing kubernetes-cni, or rather moving the dependency on kubernetes-cni out of the kubernetes-cni package itself and down to the kubelet. That way people can still leverage the default CNI plugins; people who are already consuming the deb packages can keep using the CNI plugins.
A: Being able to use the default CNI plugins without any breakage is great, so we're essentially planning on deprecating that kubernetes-cni package, but continuing to publish the things that are inside of it. So in that case there's tiny magic that essentially looks at the version and says: oh, are you below 1.17? Okay, well, I'm going to lock your version of CNI to 0.7.5, because everything beyond that has not been tested. And then there's also cri-tools.
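The "tiny magic" described here, pinning kubernetes-cni for older releases, can be sketched as a simple version check. The 1.17 cutoff and the 0.7.5 pin come from the discussion; the function and tuple parsing are purely illustrative, not the actual packaging code:

```python
def parse(version: str) -> tuple:
    """Turn a version string like '1.16.2' into (1, 16, 2) for comparison."""
    return tuple(int(part) for part in version.split("."))

def cni_constraint(kube_version: str) -> str:
    """Return the kubernetes-cni dependency constraint for a given
    Kubernetes version: releases before 1.17 stay locked to the last
    tested plugin version (0.7.5), since everything beyond that is
    untested with them; later releases may float."""
    if parse(kube_version) < (1, 17):
        return "kubernetes-cni = 0.7.5"
    return "kubernetes-cni >= 0.7.5"
```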
A: What ends up happening, and that's kind of a fault of our tooling, is that the cri-tools packages are actually not published unless they are the minimum version that's set in those scripts. So right now in our stream, that minimum version is also used as part of the dependency, and that's essentially to make sure that, you know, right now...
A: ...cri-tools is 1.13, and that's to make sure that 1.13 works. But because we have this essential delay in package versions for things like cri-tools, that means that the tools within Kubernetes that may leverage cri-tools use 1.13 as well, which means cri-tools 1.16 is not tested with 1.16. So there's magic in that PR that will also prevent you from upgrading to versions that we haven't tested.
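The same idea applies to cri-tools: only versions actually tested with a given Kubernetes minor should be installable. Here is a hedged sketch with a hypothetical tested-versions table; the 1.13-everywhere pairing echoes the meeting, but the table contents and function are made up for illustration:

```python
# Hypothetical map of Kubernetes minor -> newest cri-tools version tested with it.
TESTED_CRI_TOOLS = {
    (1, 13): "1.13.0",
    (1, 14): "1.13.0",
    (1, 15): "1.13.0",
}

def max_cri_tools(kube_minor: tuple) -> str:
    """Return the newest cri-tools version tested with this Kubernetes
    minor, refusing untested combinations (e.g. cri-tools 1.16 against
    a release only tested with 1.13)."""
    if kube_minor not in TESTED_CRI_TOOLS:
        raise ValueError(f"no tested cri-tools version for {kube_minor}")
    return TESTED_CRI_TOOLS[kube_minor]
```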
A: So there are lots of little things that we can tweak. I definitely want to hear more from everyone about what we think we should do, because it's stuff that we have been thinking about, but it definitely makes sense to have as many minds in the room as possible now that we've been chewing on it for a little bit.
B: So originally we were thinking that for kubeadm we should match the release process with the Kubernetes release process. kubeadm, to my knowledge, is going to completely follow the basic schedule of the release. So I like the idea with the markers, but maybe we can simply, when we cut a release, get the latest tag, and, you know, the community is not going to wait for the project: if the latest tag is already up, we get that in and we release Kubernetes with this particular version. Okay.
A: Yeah, I mean, that seems fine. I'm going to answer you, and then, I know, Theo has his hand up. So yeah, no, I think that would be fine. Actually, the last commit that I did for that PR does stuff to figure out what tag you should match cri-tools releases to, so it's totally possible.
A: I think it's fairly simple to add pieces to search for the latest version, or build on commit. Build on commit is close enough to what we do for the CI builds today, right? So if you ask for latest, it does pull the version marker, but it's not too much of a stretch of the imagination to instead use that.
D: I understand all the mechanics you are talking about, but I see the release process, I'm speaking more from a developer perspective. From a developer perspective, the release process has some implications, like the branch strategies, when the new branch is created, like code freeze, code thaw, and all the cherry-picking stuff. So my consideration is, okay...
D: We are moving kubeadm out of tree, but we think that it is beneficial for the community to have kubeadm as part of the main package, and we want kubeadm to continue being part of the main package. That means that even if kubeadm is in a separated repo, we would like to have the same release process, including all the mechanics that the developers are, let me say, used to, driven by the release team, and, of course, deriving from that, all the tooling which is necessary to do the build.
D: This is the first point. And my second point is, I don't know if you are considering this on top of your list, but it would be great if we can get to merge the CI build process and the release process, because basically we have all our end-to-end testing based on a different release process now.
A: There are essentially package definitions, RPM spec files and so on and so forth, in kubernetes/kubernetes. That's in kubernetes/kubernetes/build/debs, or rpms, choose your poison, and those are available, and those essentially run through CI. So those are the versions that we test against, and I believe right now it's single architecture, amd64, right? Bazel doesn't handle cross-building.
A: Right, so you take that and you compare it with the actual specs that are used for publishing Kubernetes, at least for the debs and RPMs, and those are in k/release/build/debs and rpms. So the stuff that we are using is not the same as the stuff we're testing, which is bad, and we want to change that. There is an open issue that I believe is called "k/k is the canonical source for all artifacts".
A: I believe that to be true, or I think parts of that should be true. However, just kind of the way we're set up today: if we put the release tools in kubernetes/kubernetes, it starts to make me question whether or not we need a kubernetes/release repo at all. Part of the reason I like having a separate repo for the release tools is that we can move on our own cadence in releasing these things.
A: I think, if possible, we define the specs, maybe in kubernetes/kubernetes, and pull them into the release repo and maintain the tooling there; but it almost makes sense for the tooling and the specs to live in the same place. So maybe, in doing CI, the CI should be based on a commit, but it should be CI targeted against the release repo that runs on commits of the kubernetes repo, right?
B: Definitely. So the main point there is that Tim St. Clair wanted kubernetes/kubernetes to be the source of truth, but I think that people are going to agree that we just need a single location for both CI and the release process. But can I quickly comment on Fabrizio's first point, matching the tooling between k/k and kubeadm, for example?
B: Basically, I think we could simplify the process; we don't have to follow the process exactly. The main point is that the tooling should be able to pick the right commits from the kubeadm branches to build, for instance, kubeadm 1.14-latest, to be able to test in CI. That, I think, is the trickiest part.
A: Exactly right. So I feel like today it's easier if we keep the builds together, but it's sounding more and more like we're at a point where we need to gather requirements from both the kubeadm folks as well as the kubectl folks, because Tim and I have had ideas, and those ideas need to match what the community is hoping for. We need to build around that.
D: Thank you for the answer. I'll just comment on the point about the process. I think that, okay, we can simplify the process for kubeadm, but I feel strongly about having the same process, because this process has strengths: it forces the developer to stabilize things, we have well-defined cadences, and it avoids the risk that when the build is created there is something not correct in the process. So there are pros and cons, and I'm, of course, open to finding a way that fits all the stakeholders around this table.
D: But we should be careful about introducing a different release process for kubeadm, because otherwise we have to document it to make the developers aware and synchronized with the release team; they would have to manage two different release cycles, potentially three, because kubectl will have its own. So having, let's say, all the projects that are in the same bundle follow the same process is maybe beneficial for all the actors.
A: So you are actually helping Tim and I build our KubeCon talk; we're going to be talking about the state of release engineering, tooling and all that good stuff. And I think that right now, today, we have enough tricky and sticky things to think about that it makes a lot of sense to keep all of the tooling together, and keep the way that we release together, for the near future.
A: What I would love to see is that we write a tool where I can provide it any package, within reason, or any definition, and it will spit out a package. And, you know, hopefully this could be used broadly; why not build a tool that everyone can use?
A: It would be cool if every repo could use this, and I know this is far in the future, but here's what we're going to do with the tool: building any one artifact for a repo in Kubernetes is fairly simple, right? You write a script in your repo, you tie it up with CI, you land it in a GCS bucket. Nothing too complicated there.
A: The second part of that is then consuming those artifacts and maybe turning them into different artifacts, so the debs and RPMs. It would be lovely if we said: these are the expected inputs for the tool, and this is what you can get out of it. If it's deb and RPM packages, we are far away from being able to arbitrarily provide that for any repo.
A: You know, I'm sure some people may say that that is maybe a non-goal overall for some of our tooling, but I think that today it makes sense. And this is part of what the talk is about: is Kubernetes a distribution now? Kubernetes as the kernel, and the things that we build around it, so everything that we publish on the deb and RPM side: kubectl, the kubelet, kubeadm, kubernetes-cni and cri-tools.
A: We package those as a bundle, and we toss them in the same repo. Is that what we would consider to be, you know, a distribution, the cloud native distribution, now? So that's part of what the talk is going to cover. But today it makes sense for all of these things that we consider to be that bundle to be in the same place, to be built the same way, to use similar specs. So, I've been talking a lot; do people have opinions on this?
B: If nobody has an opinion, I have one. Well, in an ideal world we could have the same tooling for all those subprojects, but the problem is that kubectl is moving pretty soon; the recent update said that they are almost done. They are moving to staging, which means that tagging and branching is going to be handled by the publishing bot for now, but I guess in the future they want to completely decouple the release process.
B: They don't want to... kubectl is going to be completely separate. On the kubeadm side, Fabrizio wants to have a very similar release process, but, realistically speaking, and this is more of a question: do you guys think that we can enable our current tooling to support this separate-repository scenario by the time 1.18 is released? My guess is going to be no, which means that we have to release and manage kubeadm with, you know, a somewhat different process. I'm also thinking of things like the manual fast-forwarding that currently happens in k/k.
A: So there's a reason for that, right? We could wire this up to CI fairly easily, very easily, but part of the goal of having a manual process around the branch fast-forward is so that a person can introspect on the commits that are being fast-forwarded into that branch.
C: There would potentially be, as we move towards things that are more branched and get more cadences, more of a sort of feature-branch-type workflow, where larger things are developed for a period and tested, and then there's somebody who's a merge master. Instead, right now, we have these GitHub-based pull requests that are really about merging a commit or a small number of commits and cherry-picks.
C: Similarly to the stable branches, you might imagine a workflow where there's a human who's a merge master and is merging in a whole tree of hundreds of commits once they're proven ready, and there it might be a totally different style of automation that helps do that in a consistent way and gives some quality guardrails, like that fast-forward automation. Because right now, from the outside, I think people say: why does that fast-forward automation even exist? During this freeze period, what are we doing, and what's the end goal?
C: The end goal there, like Stephen said, is quality, and that's a human kind of process. So the different projects, and I say different projects because I think we're really talking about splitting out to be a family of projects, those other projects, whether kubectl, kubeadm or anything else, are going to handle that portion of their release flow differently, potentially, and I feel like the tooling might not be super common there.
D: I got the point from Lubomir that probably the official tooling will not be ready, but also the point from Tim that each project might add specific needs. My major concern is that, as of today, kubeadm, let me say, follows the main release process. We are a small team, we don't have resources to invest, and we follow along.
A: Also, there is the introduction of a different process altogether, right? There may be certain guarantees around the way that we publish certain artifacts that may not exist for your team, or, you know, the fact that certain things can only be done by certain people. So people who have access to the kubernetes-release-test GCS bucket, or the kubernetes-release-test project overall, are the ones who can run the cloud builds on that project, which essentially do the staging and releasing for Kubernetes.
A: So right now that list of people includes the SIG Release chairs, the patch release team and the branch management side, in addition to the build admins on the Google side. So there is a limited set. So, could we? There are certain things that we do and don't want to open up, and we're still trying to assess that; there are certain things that we can pass through to automation as well, some of which we already do today.
A: But in doing some of that, we have to consider the number of hands that may touch certain artifacts and whether or not that's okay. That's not just for process' sake, but also to protect the project, insofar as there are considerations around releasing for CVEs and stuff like that; there are only certain people who should be doing some of this stuff right now.
A: I know I had a second point, and it's gone. No, so yeah, in terms of timeline for 1.18: I would love to be optimistic, but I'm going to be realistic instead and say that you're right, it probably won't happen in 1.17. One of the big things that I want to take down is getting a workflow around building repos. So, how do we build apt and yum repos? And then also...
A: How do we do the signing for the packages that may live in those repos? So that is an open question. I think it's going to be my white whale for 1.17, and I see that Linus is actually on the call. Linus, I'm curious to hear your perspective; for people who are not familiar, Linus is on the team at Google...
F: That is me.

A: Basically, what are your thoughts on all of this? Because if we get to a point where we're in a scenario with Kubernetes-community-managed apt and yum repos that are split out by versions and channels, so nightly, release and testing, that gives us a little bit more flexibility in what we do and what we land where.
E: I've been trying to piece together the various notes here, so I actually have not been following the exact vocal conversations that have been going on, because I joined the meeting quite late, as you have noticed. But anyway, I looked at, for example, your PR number 884; I'm assuming that's what you're talking about at this exact moment.
A: Yeah. So really, right now, I think the holdback, or the holdbacks, are from shifting too much in the process. This is especially true with, you know, take rapture, which I know very, very little about; but essentially rapture has a set of expected inputs and outputs that we should talk about, maybe this week.
A: So we're currently held back on an earlier version of the repo, which we can't let continue, so we've got to sync up on that stuff. But I think, figuring it out, especially if we have a signing key: GCP supports storing things in an HSM, so I'm thinking of a package signing key that lives in the kubernetes-release GCP project. If that happens, you know, we can...
A: That would clarify for us, like, the stuff that we're building: once it lands on your computer, or wherever you decide to build debs and RPMs, what happens afterwards? How are things signed? How do they get published up to the repo? This provides valuable insight into how we should build our thing. All right.
E: Yeah, I guess for those of you who will be seeing this from a recording, the point is that after you build the debs and RPMs, there are some additional steps that have to be done, such as signing, etc. We want to make sure that the new code more or less follows what we have been doing so far, because it's like prior art, right?
E: So you want to make sure it lines up, yep. And also, for those of you who are familiar, this code that we use to publish the debs and RPMs sits inside Google's internal source code repo, so it's not public. So people on this call have been piecing together what goes on, like, I don't know what it does, so I think exposing that would be great.
A: Yeah, so that's kind of it; you hit the nail on the head, all the stuff that's been on our minds. Essentially we produce a set of outputs for a tool that we have no clue about, and we need to figure that out. So I think, that all being said, I will plan to schedule a meeting for a time that works for maybe you, Tim, and me. Make sense? Okay, yeah, so we've spent most of the meeting on that.
A: Okay, cool, all right. So the last agenda item: I took a first pass, and it was atrocious, so I should put that disclaimer here. I took a first pass at rewriting one of the release tools, and it's not completely done yet. Again, a first pass at rewriting one of our shell-based release tools to Go, and it went okay; there are still a few things I need to figure out, like how we do... So the tool in question is actually the branch fast-forward tool that we were talking about earlier.
A: So essentially, what branch fast-forward does is it finds a common point in time, a common commit, to compare two branches against, fast-forwards the contents of one of those branches onto the other, runs a set of scripts, then allows a human to inspect, and then publishes or pushes that to some git branch, right?
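The fast-forward check at the heart of that tool can be modeled on plain commit lists. This is a sketch of the invariant (the release branch's history must be a prefix of the source branch's), not the real git plumbing:

```python
def can_fast_forward(source: list, target: list) -> bool:
    """A target branch can be fast-forwarded to the source only when the
    target's commit history is a prefix of the source's, i.e. they share
    the same starting point and the target has no commits of its own."""
    return len(target) <= len(source) and source[: len(target)] == target

def fast_forward(source: list, target: list) -> list:
    """Return the target branch's history after a fast-forward, or raise
    if the branches have diverged (which is when a human needs to look)."""
    if not can_fast_forward(source, target):
        raise ValueError("branches diverged; cannot fast-forward")
    return list(source)
```

With real branches, `git merge-base` finds the common commit and git refuses a non-fast-forward push in the same situation this sketch raises on.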
A: So we have a problem, or, is it a problem? Yes, a problem: a fair amount of our release tooling is written in bash. That is not a problem in and of itself, but with the complexity that we've reached with some of these tools, it would be nice if some of these things were written in a different, more complete programming language.
A: You get these niceties like writing sane tests, as opposed to trying to introspect on some of these tools by running them. So if you take anago, which is essentially the brains behind the operation for some of release engineering, anago is an 1800-line...
A: ...1800-plus, I think 1,857-line, bash script right now, and that is not the best, especially because I feel that for a lot of the time that these tools have been around, there has been minimal improvement to them, because people are scared to touch an 1800-line bash script, and that totally makes sense. If you want to see what anago looks like, it's there. anago is then wrapped by gcbmgr.
A: gcbmgr handles all of the bits that anago does and wraps them in Google Cloud Build. So that's what we use to do the staging and releasing of the individual Kubernetes versions, and we also use it to do the promotion of those staged artifacts into buckets that the public can access.
A: So all of that to say: these tools, like anago, gcbmgr, prin, find_green_build, and a bunch of the other ones that you would find in the release repo, also source a set of bash libraries, and those bash libraries have similar problems. So there's the common library, which does a lot of common stuff: setting up logs, being able to rotate logs, having common exits, common traps for things, and so on and so forth.
A: Then there's the gitlib library and the releaselib library. Those do what the names suggest: one figures out things for git, like what's the right GitHub URL or something, and the release one does, okay, this is the function to upload to GCS. So you take all this...
A: You take all those things, and so we have one piece where we have to rewrite the tools, and then a piece where we have to rewrite the libraries as well. So the idea behind this PR was to basically take a small chunk of some of that stuff and show what it might look like to abstract the libraries that we're currently using in bash into Go, as well as write a tool around some of that stuff.
A: So that is a very, very, very early pass at this work, or what we can potentially do there, and I think that by starting on this, maybe it'll set the wheels in motion of people getting excited about potentially rewriting another one of these tools. So, for any of the release manager associates that are on the call, that is one of the potential things that you could work on.
A: We just need to figure out a good flow for people getting involved in that and what it could look like. So I wanted to use this PR as kind of a framework for: this is what we could do; do we have questions; is the design wrong? It's simple, but it's a workable path forward. So if anyone has questions, comments, concerns, or is interested in working on some of that stuff, we can figure out how to chunk it up and give it to you.
A: So, any topics remaining, anything anyone wants to discuss? I realized we did not do introductions. This is a relatively new call, but I see some new faces on the call; I see some usernames that I've seen on GitHub but never seen in real life. So it's nice to see all of you.