From YouTube: Kubernetes Release Engineering 20200929
B: To do the formals: this is the September 29th, 2020 SIG Release subproject Release Engineering meeting. We are recording; the recording will go to YouTube after the meeting. We ask that everybody adhere to our Kubernetes code of conduct and be good people.
A: All right, so some of you have already seen this document; it's a ways-of-working agreement. Is there anybody here who has not looked at it yet?
A: Yeah? Okay. So let's look at the actual document, because there are some gaps in it, and we don't have to necessarily resolve all of these issues today, but there are a number of them. I guess I can share my screen, and then I just want to pull up the right...
C: I would just like to add one thing here, hello, hello. On "update Go versions monthly": rather than monthly, whenever a new Go release comes out, we could have a set time period, it could be a month, in which we adopt that Go release in our Kubernetes repo, and use the go fix tooling, in CI or in some automated way, to apply those updates.
B: Certainly, if there's a newer version than what we have built and released with, that opens the potential for people to question our choice, and those conversations are one of the things that we see consuming time. But there's also a communication aspect there: I'm not sure if or how we would make it visible when we, in conjunction with the Kubernetes security folks, have decided that a release isn't urgent. It's gone through a triage process, but we've decided we're either not picking it up right now, because there are other issues, or we're going to pick it up next month.
E: I was wondering, I saw you, Stephen, when you did the update, and I think it was much harder to do when the minor version bumped than for just a patch release. I mean it was harder to do the bump to 1.15, right? I don't know if that's more urgent, or if it is the same as doing a regular bump.
D: Yeah, the minor bump, it depends; the minor bump can be much harder. I think what we run into, or what we will run into every time, is that we have to consider the Bazel pieces that are involved with the bump as well. Every update to Go requires a similar update to the Bazel rules_go, so basically every time a new version of Go comes out, Bazel updates rules_go.
D: We have to pick it up in repo-infra and then use the repo-infra bump: basically bump it in repo-infra, tag repo-infra, and then come back and use that in kubernetes/kubernetes. So it's really, I mean, that is going to happen every time. I know that Jay Conrod is working on helping that not be a thing anymore, so that rules_go can arbitrarily pick up any version of the Go SDK without having to actually bump the version.
D: I know that is a little ways out, but yeah, there's kind of an intricate dance that's required every time we do this. The minor versions are a lot more likely to include things in the release notes that we have to consider, potentially code changes to kubernetes/kubernetes, in addition to just the bump process.

D: So I think it's going to depend, you know, release to release, but yes, the minor ones can be a lot tougher than the patch releases.
A: So I think maybe I just have to go through this document faster, with more prompts to sidebar some of the conversations, or else it will take up the whole meeting, and I know we have a bunch of other topics to get through. So why don't we just do that? I will flag anything that's highlighted.
A: Does that work for everybody? Yeah, because we want to get to other topics. So: updating the core Kubernetes base images, pitching weekly as an idea. Does that sound good, or do we need to resolve this through discussion?
C: To add to that: what if, rather than making Kubernetes base images weekly, we create them whenever there is an etcd image released? We can create the base image at that time. This is just a random idea.
D: There are a variety of base images that we depend on that are not hinged on the etcd version. So again, it's going to kind of depend on what comes out and when, and which base images are dependent on CVE content; various other factors might be dependency changes within the base image that we might want to pick up. So it's not going to just be etcd, it's going to be a lot of different things, and we can't faithfully map all of those to etcd. So, let's...
A: So, Stephen, would you like to resolve this? Would you like to own the resolution of this?
A: Okay, yeah, that makes sense. All right, and then we're moving down to the agreement section around team roles.
A: So if you have one of these particular roles, please fill in anything that you believe is missing.
A: The idea here is to give others a fairly good grasp of what your role is without, you know, breaking it down into endless granularity.
A: So, since that was a sub-initiative that Marko is driving: if you're an associate, it would be great for you to add whatever you think is missing here, because there was a desire for more definition for that role specifically. So here is your chance to craft that role description, for yourself and for your peers as well.
A: Moving on down to the metrics we care about. Here are a number of suggestions: regressions introduced in patch releases, time spent cherry-picking, time spent cutting a release. Are we all good with those? Because highlighted means questioned. If we're good with those metrics as starters, we can un-highlight them and just call them the metrics we care about.
A: What do you all need to be able to do your work fairly autonomously? So there's dependencies, and then there's documentation; there's nothing here yet, and we know that there's a lot of, well, at least several of both.
A: Does anybody here have time to help Marko start that list, based on your own experience? Like, when you were blocked getting something done and you needed something, what was that thing? That's the kind of information that we're looking for.
A: Okay, so how about the two of you work together there. You could post in the release management Slack channel that you are looking for more input, and, you know, if you add your own items to this doc and you want some feedback, try to engage the others who might also have their own ideas. So thank you.
D: I have a strong preference here for release manager associates to be working on the knowledge exchange goals. I think this is an extension of the work that Marko is doing, and they're the ones that are going to require the knowledge exchange first. Sure.
A: But maybe we pair them up with Eddie and Nacer to work with our associates. Absolutely.
A: Yeah, that's a good idea. And then here, so, I'll just change my comment here.
B: This is Marky; I can help here, if that helps.
A: All right, moving on down to norms and guidelines. So this is basically the gist of the agreement: making decisions.
D: Yeah, I would say that a pull request seems close to final, and that a discussion in this meeting, which doesn't seem to be mentioned, should maybe be one of the first things people do, especially if the potential submitter does not have all the context that they need to make that change.
D: Yeah, so I think what's missing here is discussing things in this meeting.
D: So, you said, I think, that the section says "making decisions"; so yeah, discussing it at the meeting should be one of the ways that we make decisions, not just PRs. That's all I'm pointing out.
A: Oh, okay. So yeah, then decision making in the meeting overlaps with priorities and process changes.
A: Those might not be meeting-friendly; you know, we might not bring those to meetings. So how should we address decision making right now in this document, for things like promoting someone, or how they would become an associate? Maybe that's already documented elsewhere, and we would just pull that in as links. It is not.
D: The process has been, I guess, semi-secretive and hand-wavy up until now. Becoming an associate is not necessarily easy, and the process has not been documented, because we are minimizing the pool of associates that come in. What it has been to date, and we can document this, and we should document this: you have been a member of the release team for some time, and you've shown aptitude and interest in potentially working on the release management tools.
D: So the huge bar here is that, with that access, or being on that path, the expectation is that you would eventually be one of the sub-15 (right now) people or so that have access to cut Kubernetes. So that's not a role we bring people into lightly.
A: Right, exactly, okay. But just in terms of crafting this document, we don't have anything written out about that at the moment. No.
D: Yeah, I think that, again, discussion in the PR is part of it, but it's also talking in this meeting, right.
D: About change decisions: I think that, given a large enough change in scope, or a decision that has high impact on multiple people, we start moving up the net of which groups we contact, right? Is it release managers? Is it SIG Release leads? Is it the release team as well? Is it kubernetes-sig-release?
D: Is it k-dev? It will depend on the situation, but there are bands of concern: some are local, some impact multiple sub-projects within SIG Release, some are all of SIG Release, and some are multiple teams outside of SIG Release, like everything within k-dev, right.
A: Who would like to help pull that together? And if you don't know what this RACI matrix is: it's basically what we're describing. If you've not heard this acronym before, it's just your concentric circles of folks that need to be informed of something, based on their relation to the issue.
D: Yeah, so I was going to say this should definitely go to the chairs, and the technical leads should be working on this. So, Tim, sounds like you're on point.
A: Yeah, all right, so I'll reach out to you and we'll do that. Availability. Unless... oh, does anybody have any questions about making decisions? Yeah.
B: I want to make sure that we bias towards doing that in an asynchronous mechanism. I think it's really important that we do have synchronous conversations; meetings can be a really good way to do high-bandwidth decision making, with discussions ahead of that. But we also know that we have a strong request from people to be more time-zone friendly and inclusive around the planet, and synchronous shared time on the calendar is really hard.
D: For sure, for sure. I just wanted to make sure that meetings were mentioned as a decision-making process, because PRs are not really the only way we make decisions.
A: Yeah, yeah, not everything can be handled in a GitHub issue. So, all right: availability. This basically comes down to working on weekends and after hours. There's an effort now on the release team, as of yesterday, to clarify this with two or three sentences of documentation, tweaking existing docs if needed.
A: But if you commit to doing something and you can't, tell us.
I: So I think the key is just that, if I'm going to cut Kubernetes, I announce it with enough time up front to ensure that everyone is on board, and people who joined newly, like me, can also find time to shadow the whole process and learn from it.
D: Sorry, I'm reading through. So, first off, as a lead cherry-pick person, know that you are not required to work on the weekends.
D: Unless you want to, right. There are rare events, and they are exceedingly, exceedingly rare now, where something may pop up, and usually that will be release related. But I think, from discussions in the 1.17 and 1.16 groups: I am certainly guilty of working on the weekends and working weird hours. Some of that is to connect with people in different time zones, and some of that is just when inspiration sparks. But yeah, this...
D: Actually, this is one of the reasons that we don't cut releases on Mondays anymore, right: to make sure that no one's trying to stage anything over the weekend, and that various people are not being expected to watch their email or our Slack during the weekend. One way to handle some of this: I think that if you are working nights or weekends, or what would be someone else's off day or something, please feel free to open issues and PRs, and tag...
D: ...the people that you need to tag; just make sure you have no expectation of getting a response until their working hours, right. And I think that, as part of this, what would be good, something that was kind of on the mental list but we should document, is what the release managers' working hours are, within the release-managers.md, right. So if you're maybe looking for someone at a certain time...
D: ...and you can't find them, then you kind of know, from the release managers sheet, that they're not working right now, so don't bug them, right. But yeah, again: do not work on the weekends.
A: Yeah, I can continue doing that. Yeah, let me just get back to... so I think the next topic was from Sasha, right?
E: So I was looking at the values that the current anago bash script uses, and at the possible values that could go into the subcommand. I did a PR showing the possible command-line interface to the subcommand; I just dropped it in the chat. I just wanted to see, because that part of the release process is a bit nebulous to me.
E: So I saw generate-release-version, which defines most of the environment variables that anago uses, and I saw that there are several release keys, and the branch, for example. A question I had: it pushes a branch, and also another branch, and the master branch. So I don't know if I have to read that from an existing branch, determine it, or just read it from someplace else; or perhaps it needs a little bit more in-depth review from someone to answer that.
D: It depends on the type of release that's happening at any one time. So if we look at the official releases first: an official release will cut an official release as well as the next RC on that branch, right. RCs will only cut RCs, unless you are requesting an RC for a branch that does not exist; then the RC will cut the branch as well as produce the RC.
D: That's the RC type for krel, right. The alphas and betas are also single releases, so requesting an alpha will just produce an alpha, and requesting a beta will only produce a beta. The only special behavior happens with the official releases and the RC types, right. So by the time anago hits this push-git-objects step, it's kind of in a bubble; it's encapsulated to that specific key.
D: So if the release that I'm doing has a special condition, like cutting a branch or cutting an additional release, you'll be kind of walking through the workflow for a single key, okay, right. So essentially, you don't need to pass it a set of ordered release keys; you just need to pass it whatever key it's working on at the time. Okay?
E: If that makes sense, yeah. And should the logic for that be inside the subcommand, an anago subcommand? I mean, do I have to capture that in an algorithm in there, or should it just do as instructed from outside?
D: Yeah, you'll be taking that as options; that's essentially what you'll be doing. So, like, once...
D: It's only considering the bits for that key, right. Then anago is also looking at: okay, I've done this one; I had multiple keys in my list, so I'm going to hit the next one. And once it goes through to the next one, then it sees: oh, I also have to push objects for that one too. Okay, so I should expect my subcommand to be called multiple times, once for each of them.
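The per-key flow described above, where anago walks its list of release keys and invokes the push step once for each, reduces to a loop like this. This is a sketch of the control flow only; the function name and key values are illustrative, not the actual anago code:

```go
package main

import "fmt"

// pushGitObjects handles exactly one release key. Any special condition,
// like cutting a branch or an additional release, is already encoded in
// the single key it is handed, so it never needs the full ordered list.
func pushGitObjects(key string) string {
	msg := "pushing git objects for " + key
	fmt.Println(msg)
	return msg
}

func main() {
	// For an official release, anago's key list holds both the official
	// version and the next RC on the branch; each key gets its own call.
	releaseKeys := []string{"v1.19.3", "v1.19.4-rc.0"}
	for _, key := range releaseKeys {
		pushGitObjects(key)
	}
}
```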
E: Yeah, okay, okay. And also, if anyone has suggestions on the types of, let's say, recoverable errors that we may hit when pushing, like network timeouts and so on: I still have to investigate when it reports and how it reports back the errors, to see which ones can be retried.
B: On the prior topic, still, I wanted to throw one thing in. This is a good question you asked, because it's important that we standardize the answer. In anago it was kind of implicit: there was a set of environment variables that were global, and everything had access to them.
B: As we decompose things, we need some semblance of shared state and a mechanism for making sure that we're aligned on that. Because if we don't, every subcommand ends up with something like a parser taking a version string and pulling out the parts, like you're asking: do I have to do that? We don't want to do that in multiple places, and even if it's a library where they share the code to do that, it's going to make our runtime potentially slower.
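A single shared helper is one way to avoid every subcommand re-implementing that version-string parsing. A minimal sketch in Go; the type and function names here are hypothetical, not the actual k/release API:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// ReleaseVersion holds the parts of a Kubernetes version tag.
// Hypothetical type; the real shared library may model this differently.
type ReleaseVersion struct {
	Major, Minor, Patch int
	PreRelease          string // e.g. "rc.0", "beta.2"; empty for official
}

var tagRE = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$`)

// ParseReleaseTag parses tags like "v1.19.2" or "v1.20.0-rc.0".
func ParseReleaseTag(tag string) (ReleaseVersion, error) {
	m := tagRE.FindStringSubmatch(tag)
	if m == nil {
		return ReleaseVersion{}, fmt.Errorf("invalid release tag: %q", tag)
	}
	major, _ := strconv.Atoi(m[1])
	minor, _ := strconv.Atoi(m[2])
	patch, _ := strconv.Atoi(m[3])
	return ReleaseVersion{Major: major, Minor: minor, Patch: patch, PreRelease: m[4]}, nil
}

func main() {
	v, err := ParseReleaseTag("v1.20.0-rc.0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d.%d.%d pre=%q\n", v.Major, v.Minor, v.Patch, v.PreRelease)
}
```

Keeping this in one library package means each subcommand imports the same parser instead of duplicating it.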
D: Agreed there. We should also note that, since this is a krel anago subcommand, eventually the idea is that this would go away, or parts of this would go away, right.
D: Yeah, I would say that you'll want the bulk of the logic to be in the package, as opposed to in the command itself. That way we can manipulate and reuse it as we need to, as anago starts to fall off the face of the earth.
B: And then, diverting backwards: your question was on failure scenarios. The main one I'm thinking of, and it's been relatively common, although perhaps more so on the pull side than the push, is that we have random failures that are inexplicable; we try to talk to GitHub and we don't. So anything that goes out across the network is something where we need to expect failure and then have a strategy for recovery. We've seen two types of things there: we talked to GitHub and we got an error back, and we've talked to GitHub and...
B: ...you get a success indication, but nothing happened; or you get a failure indication, but something happened. And so figuring out how much of what you did took effect is tricky. The next things that you retry may fail because the prior thing succeeded, and GitHub's not letting you, for example, add a duplicate tag. So the conditional logic around the calls is going to be complicated on the error handling, I suspect.
B: Yeah, you're mocking it; that would be the strategy. For anything that you do, allow failure and success, with a persistent or non-persistent change that causes future things to fail or succeed. It's quite a forking set of cases, and there's no way to cover that without mocking it; these have been maybe one out of ten, one out of twenty releases where we see some weird thing.
D: Yeah, so I think two of the things to try would for sure be the happy path and failures in network connectivity, so watching and capturing those errors; also failures in authentication against GitHub, right. We have those pretty rarely, but they're usually due to some change in anago or some change in the libraries that interpret our GitHub token, right.
D: So watch, at the baseline, for those two things, because those are fairly easy to map via status code; and then for everything else, the kooky mix of all the things that Tim said, I would say plan for one of those failures, but essentially have the retry be "I receive no state whatsoever" or "I have no idea what's going on".
J: All right, now I would like to give you an update about krel push. The main target was to move the push-build script to a Golang-based variant. Usually this script is not used in our repository, so it's not used for cutting releases or something like this, but it's used in our CI.
J: So in test-infra it's still heavily used, and we have some bits down here, for example; we have this locally-stage-release-artifacts function, which is kind of huge. The first thing I had to do in the past week was to move all of that bash code over into Golang-based code, and yeah, this has been done and it should work. But now I want to integrate it somehow into test-infra, and I created a dedicated issue for that.
J: So, all in all, we have three types of usage of this script in test-infra. So now, here we are in test-infra, and if we look at push-build, then we can see that we have a duplicated build scenario which is kind of heavily used.
J: We can just use k/release as a Go module, and then we just have to create our push-build, pass down the options we want to provide there, and run push. And that's it; that was my main intention with the past refactorings on the whole push logic, and yeah, this can work. But I showed you that we also have something like this locally-stage-release-artifacts, which is also used in anago, for example, but in a different way. And, for example, in anago...
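The "consume k/release as a Go module" flow described here could look roughly like the following. The PushBuild and Options names are assumptions sketched from the discussion, not the exact exported API, and the Push step is stubbed out:

```go
package main

import "fmt"

// Options mirrors the kind of flags push-build.sh took; names are
// illustrative, not the real exported option set.
type Options struct {
	Bucket   string // GCS bucket to stage artifacts into
	BuildDir string // local build output directory
	AllowDup bool   // allow re-pushing an existing version
	CI       bool   // CI mode, as used from test-infra
}

// PushBuild wraps the options, as a caller in test-infra might use it.
type PushBuild struct{ opts *Options }

func NewPushBuild(opts *Options) *PushBuild { return &PushBuild{opts: opts} }

// Push would stage the local release artifacts to GCS; stubbed here to
// show only the calling convention.
func (p *PushBuild) Push() error {
	fmt.Printf("staging %s to gs://%s (ci=%v)\n", p.opts.BuildDir, p.opts.Bucket, p.opts.CI)
	return nil
}

func main() {
	pb := NewPushBuild(&Options{
		Bucket:   "kubernetes-release-dev",
		BuildDir: "_output",
		CI:       true,
	})
	if err := pb.Push(); err != nil {
		panic(err)
	}
}
```

The point of the shape is that test-infra's bash call sites collapse into constructing options and one Push call against the module.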
J: ...we also push our container images as release tarballs to the GCS storage. This is something which is not done by push-build, so there is a slight difference between the behavior of both. My next plan is to build something like a krel anago-push subcommand, and this also reuses the source code we've already written; then you run krel anago push, and it passes down the version and the directory, and later on...
J: ...I can also move over those bits, for example pushing the release artifacts and creating and pushing the Docker artifacts as well. This will later on be migrated up into this subcommand too, and it looks like we don't have to do much: we just have to run a new push-build, pass down the right options, and then run the stage-local-artifacts step. That's my plan, and I hope it works. I'm testing... nah, this failed, funny enough.
J: I'm testing the integration via Google Cloud Platform and adapting the actual paths, but I'm pretty optimistic that we can reuse the source code in k/release very soon. We still have issues in migrating over push-build, though, getting rid of the push-build.sh script and all dependent bash implementations. So one other idea could be that we move the bash parts over to test-infra, but I'm not sure if they'd be happy about this.
D: So, awesome work, really, really awesome work. I know that Dan kicked us off and you picked it up and ran with it, so thanks to both of you for doing that.
D: This is one of the more interesting scripts that we have in our repo, because it touches so many things, and the way that it touches those things is kind of obfuscated. So now you've gone through all of the winding paths; I think the test-infra one is the hardest to solve. I know there was a conversation happening even on the kubetest2 issue or PR earlier today, and that actually dovetails into some of the work that Adolfo is doing.
D: The discussion was about kubectl's util editor, I think the kubectl package util/editor or something, and that's one of the packages that we're using for the new release notes editor flow, right, or editing flow.
D: One of the concerns with integrating this with kubetest2 was that now we're starting to pull in API machinery and usages of staging repositories. Staging repositories, you know, are repos within kubernetes/kubernetes under the staging directory that eventually get published out to separate repositories, right. When you start pulling in anything that is in kubernetes/kubernetes, you are potentially in, or about to be in, pain. Direct kubernetes/kubernetes imports are forbidden.
D: You should not do direct kubernetes/kubernetes imports. kubectl is less bad, because it's a staging repository, but ideally we pull in nothing from kubernetes/kubernetes. So one of the questions there was: could we, or could SIG CLI, potentially pull the util editor out into its own bits? That would solve...
D: So we have two options if we want to minimize the imports for k/release, to make kubetest2 happy with integrating the krel push. One is to talk to SIG CLI about getting the editor moved somewhere else; you know, there's an overall want to take kubectl out of tree, right.
D: I don't know when that's currently planned for, but we should talk to them about that overall and see how we can help there. Bringing the editor out into a utility that can be used by multiple people, or maybe the entire util package, I'm not sure; that's a discussion for SIG CLI. But doing that would minimize the imports that we have from kubernetes/kubernetes and put us on a happier path for integrating with kubetest2.
D: The second option is just to not use util editor, and that is more owning our own destiny. If we can find a replacement for util editor, that is maybe the quickest path. But yeah, we can potentially go both routes. Adolfo?
E: Yeah, what I could do is just strip it out and have our own fork copy, because it's not that big. I actually talked to Sasha, because I was concerned that using that was pulling a lot of crap into our tooling. But I could definitely do a fork, or maybe find an alternative and use that; it's not that big of a deal.
D: Yeah, so let's take the easy win. Especially... are people already using the editing flow?
E: Not yet, actually; they're scheduled to do their first session sometime this week, and I was invited to it. So, okay, wait one more week. It's all right.
D: Well, I would say the least breaky way to do this would be to fork it and call it a day, right, and then we can look at potential other options from there. But I think the two big imports we want to be concerned with are kubectl as well as, I believe, API machinery, which we pulled in.
D: I'm not sure that was you specifically, Adolfo, but we should look at a little "go mod why" of apimachinery too, and make sure that we're pulling in as little of the staging repositories as possible.
D: This is not well prioritized; it's really just "toss everything on the board". I think we can go through a prioritization session for this stuff in a future meeting, and I'm happy to have anyone who's interested in some of these projects help run that.