From YouTube: Kubernetes SIG Release 20200127
A: I'm starting the recording, so welcome, everybody. This is the Monday, January 27th, 2020 SIG Release meeting. Somewhat amazingly, we have nothing formally on our agenda today in terms of things that people have put in, but we do have a standing set of items. The first subproject we would typically talk about is the licensing subproject, and I'm not sure if anybody from that is here today.
A: Yeah, we haven't had a meeting, probably, since we really got to the bottom of what was going on there, and I wouldn't say we have it resolved. So, anago is a couple thousand lines of bash, for anybody who isn't familiar. It's very interesting bash. It's probably some of the most beautifully complex bash I've ever seen; it definitely is the most complex bash I've ever seen.
A: So you've got what one might normally think of as a list of things you need done to make a release, and rather than doing one and, if it succeeded, moving on to the next, and if that succeeded, moving on to the next, the top-level control logic is structured so that it goes across all of them and potentially fires a bunch of functions to start them working on those things. The problem is that the things are somewhat dependent. In a normal build and release process, you have a series of events, because one event builds on top of the prior event; or, you sort of have output and input criteria, and successful output of one phase leads to the input criteria being met for the next phase. This code isn't structured that way, and that makes it a little tricky to debug. So you have a set of things that speculatively get started, and some of them get skipped because they're known to be dependent on other things, so they get left for a later loop.
A: That loop calls all of the things again and tries to make up for the things that got skipped earlier, and the way this works is that the code is designed, or is intended (perhaps modulo some bugs), to be stateful and re-entrant. So you can call the same function that does a particular task (for example, there's a function that will tag the repo) and it might get called three or four times.
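The stateful, re-entrant pattern being described might be sketched like this; the function and file names are hypothetical illustrations, not anago's actual code.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a stateful, re-entrant release step: the
# function records completion on disk, so calling it a second (or
# fourth) time is a no-op rather than a repeated tag.
STATE_DIR="$(mktemp -d)"

tag_repo() {
  local tag="$1"
  if [[ -f "$STATE_DIR/tagged-$tag" ]]; then
    echo "tag $tag already done, skipping"
    return 0
  fi
  echo "tagging repo with $tag"     # stand-in for: git tag "$tag"
  touch "$STATE_DIR/tagged-$tag"    # record state for later invocations
}

tag_repo v1.18.0-beta.1   # does the work
tag_repo v1.18.0-beta.1   # re-entrant call: skips
```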
A: Some of those things also have parallelism, so bash is firing things in the background and potentially waiting. In addition to things being stateful, and being sort of cleaned up later by subsequent runs, some things also fire work in parallel and wait. So you have this really complicated sort of code that maybe never quite runs the same way twice, depending on the state it's running against; it's very, very difficult to debug.
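The background-and-wait pattern described here is, in miniature, something like the following; the build function is just a stand-in for real work.

```shell
#!/usr/bin/env bash
# Minimal illustration: fire several steps in the background, then
# wait on each and propagate any failure.
build_arch() {
  echo "building $1"
  echo "done $1"
}

pids=()
for arch in amd64 arm64 s390x; do
  build_arch "$arch" &    # run each build in the background
  pids+=("$!")
done

for pid in "${pids[@]}"; do
  wait "$pid" || exit 1   # fail the whole run if any job failed
done
echo "all builds complete"
```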
A: Now, it turns out that the code that's looping across some of the things that need done is not looping across a list of actions, or a list of things that need acted upon in terms of state variables, but an associative array, and it traverses it as if it's going to deal with the entries linearly, and it has a built-in assumption that they are in a particular order. So, for example, we have a set of strings: we might be doing an alpha release, a beta release, an RC release, or an official release.
A: It loops across all of those keys. This fall, we changed the underlying container for our build to a newer one, because what we were using was ancient and probably had a gazillion CVEs, and we should have a clean build stage when we work, right? Makes sense. Well, in doing that, bash 4 changed to bash 5, because bash 5 is the latest bash, and our associative arrays started being traversed in a different order, because this is something you never depend on in computer science.
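The hazard can be shown directly: bash gives no guarantee about the iteration order of `${!array[@]}` for an associative array, so code that needs a sequence should carry an explicit ordered list alongside it.

```shell
#!/usr/bin/env bash
declare -A RELEASE_TYPES=( [alpha]=1 [beta]=1 [rc]=1 [official]=1 )

# Unsafe: key order here is unspecified, and it changed between
# bash 4 and bash 5, which is exactly the bug described above.
for t in "${!RELEASE_TYPES[@]}"; do
  echo "unordered: $t"
done

# Safe: keep an explicit ordered list next to the associative array.
ORDERED=( alpha beta rc official )
for t in "${ORDERED[@]}"; do
  echo "ordered: $t"
done
```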
A: What we had discussed, whether he was going to go ahead and do it or maybe somebody else could start working on it, was whether we could unroll those top-level loops and, instead of calling things in a somewhat random order, call them once, in order, with the specific fixed arguments that we want. Initially this would lead to some duplication of code, because instead of one beautiful generic function you have a couple of specific ones that do just the parts that are necessary for an alpha or a beta or an official release, and so on.
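The unrolled shape being proposed might look like this sketch; every function name here is an illustrative stub, not a real anago function.

```shell
#!/usr/bin/env bash
# Stubs standing in for the real release work:
stage_build()    { echo "stage $1"; }
tag_repo()       { echo "tag $1"; }
push_artifacts() { echo "push $1"; }

# Instead of a generic loop over release types: one specialized
# function per type, called once, in a fixed order, with explicit
# arguments.
release_beta() {
  local version="$1"
  stage_build "$version"
  tag_repo "$version"
  push_artifacts "$version"
}

release_beta v1.18.0-beta.1
```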
A: We'd have a series of things that are working with known inputs and outputs, and each of them could eventually be separate tools, so maybe not even duplicated code but specialized, focused code. And then all of those things could run in separate jobs, and we'd have more of a short-running thing where we can get the output from it and look at it, and if it's buggy we could iterate on just that part, instead of a whole build being one monolithic thing. And each of those little things could be unit-testable as well.
A: The intent was to implement this cleanly and move forward, but with this bug we've had to dive into it more than we intended. And then Lubomir, who is even on the call today from SIG Cluster Lifecycle, has put forward a KEP to move kubeadm out of the k/k repo into its own repo, and then it still needs released in conjunction with the other k/k code (the kubelet, for example, and kubectl and the API server), and initially the intent is to version them the same as well.
A: So it makes sense to try and reuse our existing build and release code, because the set of build and release requirements kubeadm has is no different than when it was in k/k, and in theory our code in SIG Release covers that use case. In practice, it's really, really difficult. But if we got this little bit of cleaning up of the top-level control logic in anago done, it would make it easier for us to insert some tweaks that allowed us to also build kubeadm in a predictable way.
C: So if you have the time, please review my final amendments to the KEP; ideally we should merge it today. I also asked folks from SIG Cluster Lifecycle to take a look, so I think we have plenty of discussion on that KEP. Something that I started considering, like, last minute is: maybe we can start building kubeadm from the k/k repo instead of from anago, because the tools that prepare tarballs are already in k/k.
A: So the way I've thought about this is: what is the final goal, and on what timeline, and then kind of compare that against where we are. Because we know we have anago, and we know we've been working on something as a replacement. But anago, we have it, it "works", air quotes, and the replacement that we're working on is coming along, hopefully to be in a fair amount better state this release.
A: But if the kubeadm work is intended to be done in this cycle, then that time-boxes things a lot. I wouldn't trust that our replacement for anago is ready in time to depend on for kubeadm; I think there would be some risk there in knowing that it would truly be done, and that the two wouldn't collide, with one thing planned and another thing proposed that potentially over-subscribes us. So I worry about the risk of missing that target date, and that has me backing up to, like, well...
C: Yeah, I heard that this proposal was part of the alternatives in the KEP, but I'm willing to send a proof-of-concept PR at some point, and we don't have to merge it into k/k. We can just see feedback from others on this, and if they ask, okay, where did this originate from, we can just point them to the KEP as an alternative approach. But honestly, what I'm trying to say here is that maybe I'm going to send the, quote-unquote, alternative PR first.
A: None of us are, I don't think anybody's, excited about changing or modifying anago, especially as we're trying to get away from it. But this has the benefit that, at least if it's changing and modifying something else, it might smooth things for the end user. It could also result in things that are reusable as we shift away from anago.
C: I think what we should start doing, like maybe an actionable item from that KEP, like the first item, is to send an email to k-dev and see if we have any users of the CI debs and RPMs that are currently being generated from k/k. Maybe some of you are familiar with that: we generate, from the kubernetes/kubernetes repository using Bazel, debs and RPMs that are supposedly uploaded to our CI buckets, but we don't have any evidence of any potential users there.
C: So I'm willing to send a mail to gather feedback from the community, and then, if nobody replies, we can just send a PR to remove them. And maybe in the future, like I outlined in the KEP already, we can consolidate the CI builds and the release builds of the debs and RPMs in the same place, potentially making kubernetes/release the repository for that.
A: That makes sense to me. I suppose it also helps because it means it's under your control as we start to whittle out what dependencies and funkiness there are in certain places. Like, it's logical to think initially, we'll just use what SIG Release does, we'll reuse that; and then, oh, it's not everything we might have hoped it would be, so it's hard to reuse. This gives you the ability to get just what you need.
C: In terms of owning: I guess my original question when I joined the project was, why do we even have debs and RPMs in the first place? But then I realized that this is what people go for when they want to install, like, the basic packages. Just like kubectl, actually: most people just download it as a binary, but for the kubelet and kubeadm they prefer using packages, because those pull in some extra dependencies, and I realized the benefits of using packages.
C: So now, like, I am starting to understand; I prefer that we have packages. There's also the maintenance aspect. There has been this issue where SIG Cluster Lifecycle does not properly maintain the debs, because for so long SIG Release maintained them. But I think in the future there should be, like, better communication, and maybe we could also give SIG Cluster Lifecycle ownership of some of the folders where the specs are. These topics need discussion.
A: The alternatives being: just sort of document it and leave it to the user to get all the things that they want, or us putting all of it, in an opinionated way, in a container. All of those have downsides. The path that things are on right now, from a SIG Release perspective, is to commonize onto a Go-based tool for the building. Sascha, is there anything that you would want to share with Lubomir about that, like the high-level thinking and progress, where it's going, as well?
D: My first question would be: how do we want to start, and when? We are currently at a point where we can already, for example, generate a changelog for the patch releases and also for the minor releases, but we have to come to a point where we actually use it in production, and we should create a timeline for it, I think, and then try it out; also regarding the testing repository we want to set up, and stuff like that.
C: Something that the future tools, or maybe the work that this collective is doing, has to anticipate is that more projects are going to split from kubernetes/kubernetes. So we should make the tools reusable: for instance, if you have a release notes tool, then another project can use the same release notes tool.
C: You know, even for us to check if our release notes look fine, this is going to be beneficial. Because a problem that I've seen with anago is that it's bound to the kubernetes/kubernetes repository; there is no way to change that. So I think our tools have to be flexible in terms of supporting these multiple repositories.
A: I would prefer if we were more flexible than that: that we designed for an assumption that the org name, the repo name, and the branch names are all variables. The benefit of doing this is that then me, as an individual contributor, I can run the official tool against github.com/tpepper/kubernetes, branch crazy-idea, and it should be expected to work. As long as we have built-in assumptions around repo naming and org naming, we sort of get ourselves into a position where we can only build against the official things.
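A sketch of that flexibility, assuming nothing about the real tooling: treat org, repo, and branch as overridable parameters with official defaults.

```shell
#!/usr/bin/env bash
# Org, repo, and branch are variables with official defaults, so a
# contributor can point the tool at a fork without editing code.
ORG="${ORG:-kubernetes}"
REPO="${REPO:-kubernetes}"
BRANCH="${BRANCH:-master}"

echo "building from https://github.com/${ORG}/${REPO}.git (branch ${BRANCH})"
```

Invoking it as, say, `ORG=tpepper BRANCH=crazy-idea ./build.sh` would then target the fork and branch described above.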
A: If that's the only way we can do things in the short term, I mean, I guess I'm okay with it. But I think long term that makes our life harder, and we should be thinking about ways to make it easier for other people to run this code, test it, and participate in its development, as opposed to it only being a core set of people, who've gone through a fairly long contributor-ladder progression to get to the point where they're known and trusted, who run the official tools.
C: Yes, I'm already watching this particular ticket, so I really am looking forward to the proposal from Steven getting merged. Against this, this needs, for sure, discussion: we need a testing repository, or multiple testing repositories, where we can clone kubernetes and experiment with tagging and basically creating branches, things like that. I think it's very important to have this; I'm quite surprised we don't have this yet in the project. This is really pending.
A: I'm curious what others think. I see this being a potentially slippery slope, kind of like I mentioned: potentially everything needs a secondary shadow repo upon which to experiment, and that's not a great pattern. And part of the reason we may not have that is an assumption, as with other open source projects, that it's possible to run the project CI on your own fork of the project; I think we've repeatedly built down a path to where that's effectively impossible.
C: Yes, I agree: the tools should support custom branches and custom repositories, and one of the tools that already supports that is the cherry-pick tool. I was surprised: I was originally thinking that it does not support custom repositories, custom forks, custom remotes, but actually you can set up everything, and you can actually set the cherry-pick to go through your own fork.
C: It works on the principle of targeting remotes, so you can set up your remotes locally on your clone, and then, when you execute the tool, you can pass the remotes that you want to use, and it just works. I think potential testing on the user side for our build process should be done in a similar way, so when you do a dry run of a build, you can actually sort of dry-run a kubernetes build.
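The remote-targeting idea can be sketched as follows; this is not the actual cherry-pick tool, just the shape of the pattern, with the git commands echoed rather than executed.

```shell
#!/usr/bin/env bash
# Operate on whichever remotes the caller names instead of hard-coding
# origin/upstream; defaults can still point at the usual setup.
cherry_pick() {
  local upstream="${UPSTREAM_REMOTE:-upstream}"
  local fork="${FORK_REMOTE:-origin}"
  local branch="$1" pr="$2"
  echo "git fetch ${upstream}"
  echo "git checkout -b cherry-pick-${pr} ${upstream}/${branch}"
  echo "git push ${fork} cherry-pick-${pr}"
}

cherry_pick release-1.17 12345
```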
D: I'm really in favor of that. The question is: how do we want to transfer anago into a Golang-based tool? Do we want to do it in one rush, or do we want to replace parts of anago and put some Go code in, where possible? I had tried it all out last week, but, as you already mentioned, the complexity of the script is so high that it's really, really possible that I break something, and this would affect every release, which is really a bad thing.
A: Agreed, especially in relation to the testing front. I do think, if we have point tools that are well tested and demonstrated to behave in the way we expect them to, replacing a function call whose implementation is a bunch of bash with a same-name function call that just shells out to run the Go command, I would like to think that that would be something we could do safely within anago. The problem we get, again, there is testing. So that leaves us between now and the 1.18 release.
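The incremental swap could look like this sketch: the bash function keeps its name and interface, but its body delegates to the new Go binary. The command name and flags are assumptions for illustration (echoed here rather than executed), not the real CLI.

```shell
#!/usr/bin/env bash
# Same-name function, new body: instead of hundreds of lines of bash,
# shell out to the Go replacement. Binary name and flags are
# hypothetical stand-ins.
generate_release_notes() {
  local start="$1" end="$2"
  echo "release-notes --start-rev=${start} --end-rev=${end}"
}

generate_release_notes v1.17.0 v1.18.0
```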
A: We have a few points during which we could do beta releases and get some sense of whether those small changes were effective, and start moving forward with not just saying that something's deprecated, but actually removing that bash code. I like that idea, in that it means we're demonstrating some forward progress, and we're simplifying anago with things that we now know and understand. I fear that the monolithic switchover is gonna take more time than where I'm at; that's why I kind of liked the incremental approach. But yeah, this is a tricky trade-off. I think, based on kind of the last six to twelve months of discussing change, and starting to get going on change, and thinking, let's just dive in and do it and get it switched over, and time flowing by, I'm starting to lean much more towards the incremental approach, just to be moving forward. I don't see the clear acceptance criteria for doing it completely from scratch having been safely and fully met, to where we confidently switch over in the next release, or to where it becomes easier.
D: I mean, for the changelog, for example, we already could do it. So I could prepare a PR where we replace the release-notes script with the changelog-creator subcommand, and this would work. But at the point where we actually bring it into the master branch, then it has to work forever, and I'm not sure how we can test this.
A: I'm not too worried about that one; it's unit-testable on its own, right? Like, you keep updating your GitHub issues and showing, here's the new output; you are showing that anybody, and any of us, can run it and see the output and understand it. I am comfortable with that.
A: It's something that we don't have, or we didn't have as much, and it sets that precedent for saying, like, this is the type of thing we want for stuff that goes into anago. And maybe anago then starts to become just sort of this top-level control-logic shell, even though I already said, let's get rid of the top-level control logic.
A: That part needs to go, and be dramatically simplified, but I see a viable path towards doing this piecemeal, and for something like that, the changelog part, being able to do it with confidence, based on the bugs we found over the last while. I think we've done some simplifying to the tagging part, and that could be made simpler yet, and I can see kind of decomposing it this way as viable; and then there are the things that become prerequisites to other things, like packaging, which is pretty far out there compared to tagging, yeah.
A: One of the things I talked about quite a bit in the KEP issue for kubeadm is actually that tagging. Right now, one of the things that we discovered, as we were looking through anago and trying to debug (I've never actually looked too closely at what we get out of the tool), is that, basically forever, because of a chicken-and-egg problem, the tool has tagged the repo and then, based on that tag, it has started doing things. One of the things that it does is build the changelog.
A: If we're building, say, 1.18.0, we want to build the changelog for 1.18.0. What is 1.18.0? Well, it's the stuff up to the tag. But that means that, for every one of our tags, the corresponding changelog comes after the tag in the history, and unless you build from something other than the tag, or some commits after the tag, you don't actually have the changelog in the release.
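The chicken-and-egg can be demonstrated in a throwaway git repo: the commit that adds the changelog for a tag is never reachable from that tag.

```shell
#!/usr/bin/env bash
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo work > file.txt
git add file.txt && git commit -qm "feature work"
git tag v1.18.0                         # the release tag comes first...

echo "changes up to v1.18.0" > CHANGELOG.md
git add CHANGELOG.md && git commit -qm "CHANGELOG for v1.18.0"

# ...so the changelog commit is NOT an ancestor of the tag, and a
# build from the tag itself will not contain the changelog:
if git merge-base --is-ancestor HEAD v1.18.0; then
  echo "changelog reachable from tag"
else
  echo "changelog not in tag"
fi
```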
A: So there's a few little things like that. Like, Lubomir called out: well, we don't want to tag kubeadm until k/k has been tagged. In theory, you could think of them as independent: whenever kubeadm is ready to release, they could tag. But then you've got to have some sort of handshaking: who goes first, who blocks whom if they're not ready. It's a bit of an issue that I'm curious what people have for thoughts on.
C: It feels to me like we need this generic mechanic of automatically tagging and branching these repositories based on k/k, and this is not pressing for the kubeadm project, but I'm pretty sure we're going to need it for other projects too. And I saw a comment by Michelle Au from Google, who is a tech lead for SIG Storage, and they have something similar in mind for one of their repositories: they want to tag and branch their repository similarly to k/k, but they don't want to use staging. And to clarify here:
C: Staging is something that the so-called publishing bots can automate. So if you had your repository as part of staging, the publishing bots can pick your changes and publish them to a remote repository, also branching and tagging this remote repository. But SIG Storage, similarly to kubeadm, does not want to do that; they want this external process of branching and tagging. And I'm going to start experimenting pretty soon in this area by creating a so-called postsubmit job. So the idea is to create a Prow postsubmit job.
C: So every time k/k creates a new branch or a new tag, the postsubmit job is going to trigger, collect a list of tags and branches that are not present on a certain remote repository, and create them. So maybe in the future this same process can be used for a list of repositories, instead of only the kubeadm repository. But this was my experimental idea after discussing with others about whether I need a Prow job for that, or something else.
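One way such a job could find the missing refs, sketched with stub functions standing in for querying each repo's tags (a real job would use something like `git ls-remote --tags <repo-url>`):

```shell
#!/usr/bin/env bash
# Stubs standing in for the tag lists of the source (k/k) and the
# target repo:
src_tags() { printf 'v1.17.0\nv1.17.1\nv1.18.0\n'; }
dst_tags() { printf 'v1.17.0\n'; }

# Tags present on the source but missing on the target; the job
# would then create each of these on the target repository.
comm -23 <(src_tags | sort) <(dst_tags | sort)
```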
A: So, consider the case where we've gotten out of sync somehow, and you've got three releases on k/k that have just happened, and those branches and the tags need mirrored over to kubeadm. What would be the order? Obviously you create the branches; do you populate three changelogs and tag them all? Do you populate a changelog, tag it, populate a changelog, tag it, populate a changelog, tag it? Or make the three tags just as you make the branch, and then add three changelogs after? Like, what would you expect as a logical ordering of events there, so that somebody looking at the git repo would see content, on a commit basis and on a tag basis, that was logical?
C: I'm not sure. But if in three years' time I see the kubeadm project having this problem, for instance, I would consider the kubeadm project unmaintained, and in that case maybe we should move it to a sunset organization, something like that. My point is that the publishing bots right now currently do not allow this scenario to happen; if it happens, it triggers multiple people, like Dims and Nikhita, immediately trying to fix the problem, I think.
A: We should consider that it's been stuck for days, or maybe even as much as a week, where it wasn't publishing. And if, for some reason, we were still releasing on the k/k side, say we had just released but then got a surprise CVE, I think in any week it's relatively likely that this could happen.
C: I personally think that we should not include changelogs in the sub-repositories, like kubeadm, maybe the kubelet, the kubectl forks; I'm not convinced that we need changelogs for those if they are part of the official release. This is really, like, a detail that we can clarify in the future, like, see how it goes. So I cannot give you an exact order right now.
C: This is one of the ways it's possible to create a changelog; the other way is for the individual projects to write the changelog in each of these repositories. I think this is also a good way, but it's still not clear what the best way to proceed is. I think I'm in favor of the first option: the release tool having the potential to pick any arbitrary repository, collect the same labels, and basically create a subsection for a certain project.
C: Some projects, like kubectl maybe, are not even going to have a changelog that is bound to the kubernetes release cycle, so it might not be appropriate for some projects to use this tool. But maybe the kubelet is going to be bound to the kubernetes release, because it's quite core, quite essential; so maybe we are going to have a section with kubelet changes being gathered in the changelog.
C: This allows us to not have this potential drift between the styling of the changelogs: if it's the same changelog tool, and basically the same output from the tool, we can generate a consistent changelog for the whole project. But if kubeadm decides to maybe fork the release tool and do its own changelog, then we are going to have this, maybe, difference in styles, or maybe bugs, I don't know.
A: All right, I'm gonna move on to the next thing on the agenda, which is just to get a readout on release-team-related things for this forum, since it is different than the release team meeting and only happens every couple of weeks. Is there anything important that SIG Release should know about, status on the 1.18 release?
A: Alright, well, they're meeting in 12 minutes. So if anybody who's on the call right now wants to get an update on 1.18, that meeting is about to happen. For anybody on the video who was looking at SIG Release, curious about how that specific 1.18 release is moving along: the video from the meeting that's gonna happen in a little bit here will also be uploaded to YouTube. That's where you would find out about that.
A: Maybe it was Steven, since it was his issue. So the CI Signal subproject is an idea to break yet another thing out of the release team. We've had a problem, perceived at least, around continuity, where each quarter, when we make a new release, the release team turns over, new people get involved, and there's not, perhaps, continuity between those. In reality, we have people who are shadows and leads, and there's sort of a progression, and people do hang around.
A
The
people
in
the
project
more
broadly,
haven't
been
able
to
to
key
into
because
of
the
amount
of
noise
in
test
results.
So
that's
the
general
thinking
there
there's
a
it's
an
RFC
at
this
point
and
the
issue
its
kubernetes
sig
release,
issue
number
966
and
all
they
could
also
in
or
it
is
linked
in
the
minutes.
But
if
anybody
has
lots
of
ideas
there,
it
would
be
great
to
comment
on
the
issue
and
get
up.