From YouTube: Kubernetes SIG Release - 2019-03-26
A: The agenda is in the usual place, but I will paste it into the Zoom chat here. As always, I would ask that everybody adhere to the Kubernetes code of conduct and be good people. So, first things first on the agenda: we made a release yesterday, yay team, 1.14 is out there. 1.15 things are now starting to ramp up. We do have a bug that's come up in the last 24 hours, but we'll get to that a bit later. So the first thing on the agenda is Maria, who is up very late.
B: Now that things have calmed down a little bit and we're past the 1.14 release, I would like to start on that thread again and move forward with making sure that 1.14 looks like master, but also that any other branch dashboards look like master going forward. So I have put some steps together.
B: I put the steps that I could think of in the document that's linked in the agenda. Essentially, my approach would be: let me audit the 1.14 jobs, see what the overlap is with the ones from master, arrange them into the same kinds of boards, which is blocking, informing and upgrade, and then iterate from there. There are a couple of suggestions.
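A minimal sketch of that kind of audit, assuming the TestGrid dashboards live in a single config YAML in kubernetes/test-infra; the config path and the exact dashboard names here are illustrative:

    # List the test_group_name entries under a given dashboard in the
    # TestGrid config, then diff two dashboards to see the overlap.
    list_jobs() {
      sed -n "/- name: $1\$/,/^- name:/p" testgrid/config.yaml \
        | grep 'test_group_name:' | awk '{print $2}' | sort -u
    }
    diff <(list_jobs sig-release-master-blocking) \
         <(list_jobs sig-release-1.14-blocking)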
B: One of them is to fold upgrade into blocking and just have one blocking and one informing board per branch. There are also a couple of questions and, I guess, blockers that I have at the moment, the main one being that there isn't an exact mapping between the jobs that we look at for branch CI compared to master. Some of them are GKE, for example; some of them just have additional jobs that we don't look at, or that we don't have equivalents for in master.
C: So I would say: let's move all the upgrade jobs into master-blocking, and then maybe we can have a conversation about whether those upgrade jobs are actually... I don't know, I'm sorry; as I'm saying this out loud, maybe we want to have the conversation about whether those upgrade jobs are something that meets the criteria we have laid out for blocking jobs. They take a really long time, they're not scheduled super frequently, and they're incredibly flaky.
C: So you are hearing me decide out loud that perhaps instead we should move all the upgrade jobs to the informing dashboard. Then we only have two dashboards: blocking, which we super-duper care about and automation can be pointed at, and informing, which is nice to know about. And then just copy that for all of the other release branches. I also feel like...
C
It
would
be
a
really
good
idea
for
us
to
enforce
this
with
some
kind
of
automation
or
planting
tool
to
make
sure
that,
like
everything,
looks
the
same,
you
could
maybe
go
as
far
as
like
having
them
be
generated
by
a
thing,
but
I
would
say
just
to
start
with
some
enforcement
of
hygiene.
That
would
be
a
good
idea.
We
already
do
this,
for
example,
by
saying
that
if
you
had
a
new
crowd
job,
it's
a
job
to
show
up
somewhere
in
testing,
so
that
those
test
results
are
being
sent
off
to
oblivion.
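A minimal sketch of the kind of hygiene check being described, assuming Prow job configs live under config/jobs and the TestGrid config is a single YAML file; both paths are illustrative:

    # Warn about any Prow job that is not referenced anywhere in the
    # TestGrid config, so its results do not vanish into oblivion.
    for job in $(grep -rho 'name: [a-zA-Z0-9._-]*' config/jobs \
                   | awk '{print $2}' | sort -u); do
      grep -q "$job" testgrid/config.yaml \
        || echo "WARNING: job $job is not on any TestGrid dashboard"
    done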
F: We are still in the planning phase of what to do and what is the best way to do it. I cannot give you a concrete response on how the kubeadm upgrades are going to look. Possibly two weeks after the release is out, probably three, two weeks after now, we are going to have a plan. We are actually discussing right now, in a Google doc, what the plan is, and it does cover, like, a separate question related to that.
A: And we have the difference between the things in the cluster directory and the things that are happening with kubeadm. I want to ensure that we don't lose track of the fact that we need something, ideally meaningful signal, on install and upgrade, and that within this variation it's easy to forget which one we're looking at at times, across the branches, or the changes as we shuffle things around. Oh yeah.
F: So look, there is a document that explains how users should do the upgrade, and it's kind of... it's not super clean how to do it, because you have to upgrade the kubelet last, and there are multiple catches in this document; if you mess up a step, the cluster is broken. And we can automate this, you know, in the upgrade tests, but still, every single provider, like GKE or AWS, might do upgrades completely differently, in a different topology...
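For reference, a minimal sketch of the documented kubeadm upgrade order being referenced, for a single control-plane node on a deb-based system; versions are illustrative:

    # 1. Upgrade the kubeadm binary itself first.
    apt-get update && apt-get install -y kubeadm=1.14.0-00
    # 2. Plan and apply the control-plane upgrade.
    kubeadm upgrade plan
    kubeadm upgrade apply v1.14.0
    # 3. Upgrade the kubelet last, then restart it.
    apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00
    systemctl daemon-reload && systemctl restart kubelet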
C: What I'm trying to drive toward here is: we have a bunch of upgrade tests, but then, like, Cluster Lifecycle refuses to support them because they're made of bash and not kubeadm. Okay, fine, but I'm a member of the community who would really like to see there be a canonical upgrade story, and it doesn't feel very great to hear that, oh, this is like a super legacy, deprecated thing that only Google supports. And if I can be honest, Google does not super-duper support it.
C: It is not well staffed; I'm pretty sure anybody who's attending this meeting right now knows that. And so we're left with this situation: we have incredibly noisy and possibly ineffective upgrade testing, and supposedly nobody really uses this stuff to stand up clusters anyway. I don't know, I'm really confused. So that's why I'm more comfortable at least moving it over to the informing board for now, because I don't think it's anything that we can reasonably block on.
C: We actually still had perpetually failing upgrade tests all the way to 1.14. Let's call it what it is, and let's not make it blocking until it's actually blocking. That said, I know I said that kind of super fast, and maybe, with the work that you're doing with kubeadm, we could get something that at least proves that we can move from one version of Kubernetes to another. I just think there are all of these corner cases around making sure, you know...
C: ...doing this thing where it's like, no, keep applications running and make sure that they're active and live while an upgrade is happening, so as nodes drop down and come up it's still making sure everything's okay. Standing up kind and quickly bumping it from one version of Kubernetes to another will not solve that. So it kind of comes down to what is the level of coverage that we as a community are comfortable with, and, I think, how cloud providers choose to run upgrades for their hosted...
F: A quick response: what is the view of a kubeadm cluster in the first place? The goal of kubeadm is to be the official setup tool that provides a minimum viable cluster of Kubernetes, and from there we don't really support cloud providers. We do, but the cloud providers might do something completely different in their products.
F: For instance, GKE is running the API server on a separate node; it's a control plane component, but it's not on the same node as the scheduler and the controller manager. kubeadm runs all three components on the same node, so the control plane is isolated from the worker nodes, and we believe that this is the canonical topology.
C: Like, I think I want to live in a world where there's a Cluster API driven thing, because Cluster API is totally open source, and that might represent the open source community's cloud-agnostic way of standing up a cluster and then upgrading it, exercising that best practice or that best order. We just don't quite have that today, yeah.
F: The story with Cluster API is that they still have a lot of decisions to make related to how they want the architecture, because the architecture picture there isn't clear, and also the API objects aren't clear. The tools that they use to bootstrap clusters aren't clear, and I don't see any upgrade tests coming from Cluster API in the 1.15 cycle, correct?
C: Which brings me back to: so we have a big pile of bash, and that's the only thing that's really providing us upgrade signal. So, you know, it sure would be cool if we could get more people actually helping with that, or if we as a community could collectively decide that we would rather have zero upgrade signal than flaky, barely supported, we're-not-really-sure-if-anybody-uses-it-this-way upgrade signal. And we don't need to decide that today, and I feel like I've taken us far afield. I just wanted to say...
H: I don't believe that part is true; I mean, there's a supported kubeadm upgrade command. I think, to your point, Lubomir, there are some differences between how a user sets up a cluster with kubeadm relative to how, for example, a cluster is provisioned internally by Google for GKE. I think, to Aaron's point, those differences, for the open-source project, are not necessarily super important, like, for the project's goodness, whether Kubernetes is ready to release or not.
H: At least SIG Release should be basing our decisions, and our architecture for designing the upgrade tests, on kubeadm and an actually supported upgrade workflow based on open source, rather than on either a bunch of bash in the cluster directory or some hypothetical proprietary code that lives someplace else and is used by a cloud provider. So...
H: But as a project, we need an opinionated answer to how to spin up a Kubernetes cluster and how to upgrade that cluster for testing; or, to Aaron's, I guess, straw, you know, non-argument: decide that as a project we don't care about whether you can upgrade between Kubernetes versions before we release a new version of Kubernetes.
F: See, well, in any case, I think we should start moving away from the pile of bash that is kube-up. So my proposal for 1.15 is that me and my colleagues across Cluster Lifecycle can try to get the upgrade signal working for kubeadm properly, and we can continue keeping... I mean, the upgrades, as already suggested, we should move them.
F: It really depends on who can maintain this. From SIG Cluster Lifecycle we have only Justin Santa Barbara at this point who is willing to maintain these scripts, and I am not sure that anyone else is going to agree. So it's probably a SIG Release decision whether we should keep this alive, sort of, going forward. Yes.
C: But there is not anything in there that is Google-specific; anybody in the community is capable of running these scripts themselves, standing up a cluster the exact same way and doing the same amount of troubleshooting. Like, I hear, and I'm sympathetic to, the fact that, well, it requires spending money on Google Cloud; maybe we can find ways around that.
C: With anything that's stood up with all the horrible piles of bash, you get all the logs, you can SSH into all the nodes, you can see all the things. You are just as capable of troubleshooting these things as folks within Google. It's clearly, purely just been a stance there: Cluster Lifecycle only wants to support anything that uses kubeadm, they refuse to touch anything that does not use kubeadm, and the bash does not use kubeadm, unfortunately.
F: You know, with the Go problem we had to bump Go as well, and I looked at the diff, and all the resources in there were Google-specific objects, Google-specific resources. And in Cluster Lifecycle you only have Justin Santa Barbara at this point. You know, Robert Bailey moved to another area, he's no longer in the SIG, so we basically only have Justin now. So it's not like a question of we don't want to; we just don't even know what these objects are, because they are Google-specific.
C: Here I think we're in violent agreement, because I share that problem as well, and trust that I'm having those discussions in non-publicly-recorded mediums. But I just want to remind everybody that it is something that we can publicly discuss and publicly support out in the open as well. But I feel like I'd like to get back to the agenda, unless... so.
A: If we summarize this into a proposal and kind of hand it back to Maria, it seems like we're in general agreement to say that upgrade will be simply informing for now, and that we as a community need to decide what the forward path is if we want something for upgrade to be blocking eventually.
B: Good, most likely starting tomorrow then. Okay, it sounds good to me as well to just fold it into informing for now. I would like to do that at the same time as getting written clarity on the criteria that would make them blocking again; I feel quite uncomfortable with not having coverage for upgrades, or blocking coverage for upgrades, rather.
A: It's a worry, and I think that discomfort is healthy and good. But, as Aaron mentioned, each of us, as a release lead going back for quite a while, has had to look at that red stuff or the flaky stuff and kind of accept it and wave our hands around it, and we've always gone ahead and released. So we haven't really had a strong signal there anyway.
E: So I want to try something with the 1.15 release team, based on some conversations we had around selecting the 1.15 release team. I don't feel, and a couple of other people agree with me, that our apprenticeship/shadow/mentorship model is universally working. And so my goal for the 1.15 release team is going to be to try to supervise the shadow selection process and then the sort of shadow training process, as in, you know: are the shadows getting what they need, are they learning how to do their jobs, etc.?
E: One of the issues we had last time was that the shadows didn't get selected until several weeks into the release, because we were trying to switch to the new system, so I'd like to actually go ahead and get that running right away. Is Steven on the call? He said he had a conflict today. Okay, so I'll bug him on Slack, because, I guess, one of those release lead things, the one thing I'm not determined about is: do we still want to use a form for people to volunteer to be shadows?
E: Okay, anyway, my goal is to get that form up either tomorrow or Thursday, because we already have people commenting on the issue, so we need to actually give them a place to sign up.
C: Sounds roughly good to me, although, I feel like... I personally have witnessed a lot of people doing PRs and work in the enhancements repo, so I think a lot of planning has already started. But those dates are really useful to call out, like when the team forms and you're done, you have your team, that's it going forward. And then, when you talk about what process changes you are thinking about introducing, when do you want to kind of put a stop to that sort of stuff? No, we're...
D: And I think for any other changes, I'm kind of waiting to see what comes out of the retro. I know with the KEPs... okay, well, let's not change anything about the KEPs, but hopefully make it more clear that having a KEP merged in an implementable state is a requirement. So we don't need to have the freeze date and then that, like, buffer for KEPs to get to that merged, implementable state.
A: If that fall release really slipped, you'd run into a whole bunch of holidays and complexities, and the same thing happens with the second-quarter release and the impending Fourth of July, and people having perhaps made summer vacation plans around things in the northern hemisphere. So it's one to keep in mind; it's important that we hopefully make that date. Yeah.
C: So I have walked Claire through, I think, most of the administrivia I had to do up front. I think there's a couple of other PRs I still need to do to put Claire in the right OWNERS files. I'm assuming I should now ask the security mailing list for me to be removed from that mailing list, because I'm no longer release lead, and Claire needs to ask to be added to it. Is that right, or can I just tell them: roll me off, roll her on?
C: One thing I tried to do in anticipation was to review everything from the 1.13 retrospective that we said we'd do. Total coincidence: my name was on most of those items, and many of those were represented as GitHub issues, so I'm kind of going by the issues: if it's still open, we're probably still working on it; if it's closed, probably not. I just want to take the time to celebrate, like, what are the changes we made...
C: ...that worked, and we probably all have a lot on our minds about what we would do differently. But I think it can be helpful to take some time to reflect on what went well during those months. I say this as a release lead who was in charge of a release that went out the door on time, with no crazy last-minute cherry-picks or anything like that, and yet still feeling like: oh my god, that was ridiculous, nothing actually worked, how did it even make it out the door?
A: This is a SIG Release ongoing issue. We've got multiple places where we have build scripting and tooling, and the official artifacts that we build and release are generated by a Googler and pushed to the web. Now that code is publicly visible and we can see it, but some aspects of this, like the RPM signing and the final pushing to the official repos, others of us can't do; that's still under Google.
A
We
we
have
a
number
of
keps
open
around
improving
artifact
generation
and
consolidating
things
under
the
KK
repo
and
it's
build
directory
instead
of
K
release,
but
this
all
broke
yesterday.
We-
and
this
is
the
reason
I
guess
this
is
not
114
specific-
is
we
for
for
reasons
that
are
well
for
four
reasons:
I'll
just
leave
it
at
that.
For
reasons
we
all
we
all
so
produce,
builds
of
all
the
other
supported
releases,
branches,
111,
112
and
113
yesterday.
A: So all four branches were shipped yesterday, and one of the things that was a part of that was an upgrade to the minimum version of CNI that we use, and that's caused a weird hiccup in our RPMs. We've got a number of people who've complained or noted this, both on GitHub issues and on Slack today, and then also we have a test that caught this; Caleb pointed this out earlier this afternoon. We actually have a test case that caught the problem.
A: Now, it caught it about 24 hours after we introduced it, but it is at least positive that we have a test case that caught the problem. The current failure there shows the issue: both the Debian packages and the RPM packages on prior builds articulated a desire for the CNI component at version 0.6.0, and specifically at that version. All of the bits for that are still out there, but for some reason the package managers on both sets of distros are not correctly following that pinning of the version; they see that there's a newer CNI, 0.7.5, out there, they try and pull that in, and there's an unresolvable dependency conflict. So we have two options.
A
We
could
either
tell
people
sorry
and
use
the
latest
packages,
because
those
are
the
ones
that
we
support
and
they
work
or
we
could
build
a
a
new
revision
of
the
prior
packages,
so
say
like
looking
at
one
dot
11.6.
For
example,
if
you
look
at
his
version
string,
there's
also
a
dash
to
zero
after
it,
which
refers
to
the
packaging
iteration.
If
we
update
the
package
scripting,
then
that
should
increment,
so
you
have
a
one
dot,
11.6
dash
one
or
or
something
higher,
and
we
could
fix
this
back.
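For illustration, a sketch of the version string being described, assuming the standard Kubernetes apt repository; the exact output line is illustrative:

    # The suffix after the dash is the packaging revision, separate from
    # the Kubernetes version itself:
    apt-cache madison kubelet
    #   kubelet | 1.11.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
    # A rebuild with fixed packaging scripts would bump only that revision,
    # e.g. 1.11.6-01, leaving the Kubernetes version at 1.11.6.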
A
Then
this
would
mean
a
whole
bunch
of
building
effort
for
our
sole
Googler,
who
does
all
of
this
building
to
sort
of
back
populate,
updated
packages
that
fix
this.
There
are
also
workarounds
people-
this
is
a
sadly
a
somewhat
common
breakage
in
the
packaging
world
and
something
that
kubernetes
has
also
done
one
or
two
times
before
in
the
last
few
years.
So
people
can
get
around
this
on
their
command-line.
It's
it's
ugly.
It's
not
great,
but
I'm,
curious.
What
people
think
about
what
we
should
do
from
a
support
perspective
here.
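A minimal sketch of the kind of command-line workaround being alluded to, pinning the CNI package explicitly next to the Kubernetes packages; exact versions are illustrative:

    # Explicitly install the pinned CNI package together with the
    # Kubernetes packages, so the resolver does not jump to the
    # conflicting 0.7.5:
    apt-get install -y kubernetes-cni=0.6.0-00 kubelet=1.11.6-00 kubeadm=1.11.6-00
    # The yum equivalent, naming the pinned versions explicitly:
    yum install -y kubernetes-cni-0.6.0 kubelet-1.11.6 kubeadm-1.11.6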
H: It's also not clear to me, so, like, I've got a VM here now, and certainly I've been able to install previous versions of Kubernetes with that 0.7.5 CNI dependency. So certainly, I mean, there are steps you can take as a user, even given the current state of the repository, to perform the...
H: ...you may need to download the packages yourself locally; you may not be able to rely simply on apt, or you may need to use rpm and dpkg or some combination thereof to get the fix to patch your system.
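A minimal sketch of that manual path, assuming the pinned packages are still published in the standard repositories; versions and file names are illustrative:

    # Download the pinned .deb files locally, then install them directly
    # with dpkg, bypassing apt's dependency resolution:
    apt-get download kubernetes-cni=0.6.0-00 kubelet=1.11.6-00
    dpkg -i kubernetes-cni_0.6.0-00_amd64.deb kubelet_1.11.6-00_amd64.deb
    # On an RPM-based distro, the equivalent is fetching the .rpm files
    # and installing them with rpm -ivh.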
H: I think that the way I have suggested makes that, I guess, a little bit more obvious, because you simply would not be able to install older versions of Kubernetes, because you would have yanked the packages.
A: That's an uncommon action in the ecosystem, but I understand your rationale for it. And then I wonder what we would tell, so, for example, the user today who is reporting: I use 1.11.6 because we have a requirement to use that, we haven't been able to move forward yet in our organization. What do we tell them? Don't use our artifacts? And I...
H: That's, I guess, kind of my belief. But, I mean, and I think certainly it will be easier once there's only one location where the package definitions live. So if you had a requirement, for some reason, that you needed to use exactly Kubernetes 1.11.6 and you also need RPMs, then it's really on you to figure out how to do that; you can build from the source there and figure out how to patch it yourself.
H: The project itself, I think our only responsibility is to say: yeah, we fixed the CNI vulnerability, and Kubernetes 1.11.7, that .7 release, contains it; we really only need the validation of the bump on the 1.11.6 tree, and you should download the 1.11.7 artifacts moving forward; the previous artifacts have been removed, as we no longer have confidence in them. Other than that, rebuilding things, I think, is just the wrong thing to do, because either you need to bump the revision, and there's no easy way of...
H: There's, unfortunately, no easy way of recording, or checking in, the fact that you bumped the revision for a previous version of Kubernetes, because of the split between k/release and kubernetes/kubernetes. So it would be more difficult to communicate that fact to posterity. If we take the general stance that we will bump the revision on previously produced artifacts, today it would be easy to lose...
H: ...the rationale for why that occurred. I believe it would be easier to yank the artifacts and, when we announce that we produced, say, Kubernetes 1.11.7, say why we have yanked the previous artifacts while we are announcing the release itself. And for, I guess, prior art, I mean, we have done this ourselves before when there are vulnerabilities; I mean, not all the time, because...
H: No, it's not a problem that happens all the time, but it's certainly something we do for, or have done for, serious enough vulnerabilities, because there's just no good reason to allow people to download packages you don't trust. I have never been convinced that there's any good reason for a project, an open source project, to do that.
A: Would that be articulated somehow in, like, a discoverable or machine-readable way, so that it could be encoded in a test? If I'm a user and I'm like, hey, your artifacts are missing, we could go look at a test case and say: no, those are supposed to be missing, and here's the commit that references why we flagged those as deleted. Or how would that not appear really flaky or weird to a user, I mean?
H
You
could
imagine
well
I
guess
it
depends
so,
like
I
mean
if
you
are
a
shop
that
uses
scentless
when
there's
a
new
version
of
sent
to
us,
the
last
version
for
most
mirrors
just
goes
away.
So
if
you
are
an
editor-
and
you
know,
if
you're
an
enterprise,
you
have
already
figured
out
how
to
deal
with
that
or
you
know
you
just
have
not
so
that's
I-
guess
the
prior
art
there.
So
there
that's
the
extreme
case,
which
I
don't
think
we
need
to
need
to
take
I.
H: I think, because we produce... you know, there's an email that goes out whenever we produce a release, and if we had to take some of these extreme steps, like removing packages, where there's no good way of communicating that to the open source project because it's proprietary architecture or, yeah, infrastructure tooling, then you just say there that you had to pull previous binary artifacts because of this extraordinary reason; like, presumably there was a security release that then triggered this in the first place. I think that that is the general process that, you know...
H
We
really
see
this
version
of
kubernetes
due
to
our
nobility
and
a
dependency.
Please
upgrade
we're
also
taking
this
unusual
step
of
yanking
previously
published
artifacts,
because
we
no
longer
have
confidence
and
was
previously
published
artifacts.
If
you
care
about
that,
you
already
have
them.
If
you
want
to
rebuild
them,
their
shelf,
you
can
go
to
that.
You
can
go
to
the
source.
I
think
is
a
reasonable
stance
to
take
given
a
current
level
of
staffing
and
just
general
familiarity
with
packaging
as
a
problem.
I.
C
Feel
like
I,
want
to
agree
with
Caleb
there,
but
I
don't
know
it's
it's
unclear
to
me
like
how
many
users
actually
consume
Cooper.
That
needs
to
be
at
Debian's
and
rpms,
but
I
kind
of
personally
have
to
say
it's.
The
project
boundary
for
artifacts
stops
at
the
tar
boat,
the
binaries,
the
tar
balls
and
the
docker
images
that
we
produce
in
the
packages
and
stuff
sure
would
be
really
cool
if
the
people
in
charge
of
those
distributions
actually
were
responsible
for
being
painting
packages.
For
that
stuff.
C
I
know,
it's
is
really
squishy
gooey
boundary,
but
I
do
think
it
comes
down
to
who,
in
the
community
is
willing
to
support
this
sort
of
thing.
So
I
think
there's
a
helpful,
healthy
use
discussion
to
have
to
figure
out
like
what
does
the
community.
What
are
the
community
expectations,
but
that
needs
to
be
weighed
against
what
resources
will
the
community
bring
to
bear
to
support
this
sort
of
thing?
C
Jordan
sent
out
a
great
email,
acrostic
release
and
testing
and
architecture
and
the
LPS
working
group
about
you
know
what
are
all
these
external
dependencies
and
what
can
we
do
to
maybe
reduce
our
reliance
on
these
or
how
should
we
think
about
supportability
and
stuff?
With
all
these
things
in
mind,
this
feels
like
a
good
concrete
example
to
talk
about
there
and
have
a
better
discussion
of
generalizing
I.
H: I mean, I've been making this point for the better part of three years now, but, I mean, people do use these packages; I understand that. You know, I think that we just, yeah, we need to find a way of producing packages today that, one, keeps us from lying to the users.
H
You
know
we're
kind
of
on
the
hook
for
validating,
but
I
I
would
like
just
a
general
approach
to
just
published
from
the
from
the
from
the
head
of
whatever,
whatever
release
branch
we're
working
with
today
and
that's
really
the
best
we
can,
we
can
do.
We
can
publish
release
head
of
the
release
branches
for
the
current
current
three
and
you
really
shouldn't
expect
so,
but
you
know
anything
else
at
the
people
in
the
community
or
in
general
distributions
want
to.
You
know,
fill
in
that
gap.
H
I
would,
you
know,
obviously
love
for
that
to
be
a
case.
Yeah
really
I.
Guess
the
TLDR
there's
ever
said
more
distributions
were
alike
arc.
If
you
go
look
at
their
installation
instructions
for
kubernetes
and
they
they
just
pull
some
source,
so
I
feel
like
there's
yeah
there's
a
lot
we
can
do
together
with
the
water
community.
Do.
H
I
think
that's
reasonable
I
mean
you
can
also
just
like
collect.
My
various
me
saying
that
for
the
last
three
years
and
just
just
collate
them
all
into
one
place
like
yeah.
This
is
this
is
this
has
always
been
best
effort,
and
you
know
if
we
were
even
getting
getting
into
the
business
of
I
I
know.
This
is
just
kind
of
me
spitballing,
but
I've
always
felt
that
we
should
just
be
producing
the
full
machine
image
from.
H: ...like, you know, a Debian base or a Fedora base, slamming in our binaries and just releasing those images, so people can just, you know, import them into their cloud provider of choice or whatever. Rather than figuring out, like, overlaying, starting from a clean base, which we can do more easily, rather than just building the packages and throwing them out into the world.
H
But
you
know
it's
like
I
feel
like
if
you're
concerned,
either
about
running
an
air-gapped
environment
or
just
general
startup
time.
Why
would
anyone
be
using
packages
in
this
day
and
age?
I
mean
I?
Guess
you
know
I
guess
perhaps
I'm
too
biased
versus
you
know
how
we
do
our
distribution
for
GCP.
But
you
know
we
do
a
lot
of
work
to
just
expose
the
binary
or
the
architects
you
want,
as
as
things
on
disk
somewhere.
So
your
docker
images
are
a
disc
that
will
be
appeared.
Do
you
want
to
pull
them
from
anywhere?
H
Yeah
but
I
feel
like
as
a
project
it'd
be
way
easier
for
us
to
produce
just
just
take
they
clean.
This
is
what
canonical
produced
for
Bunty
Aur.
This
is
the
latest
step
in
stretch
and
we
just
slam
in
our
changes
and
increase
as
the
image
and
you
as
an
organization
can
consume
that
as
you
want.
But
we
can.
We
can
test
the
images
much
easier
as
a
full
image,
then
as
individual
packages
I
believe.
K: A while... I mean, I typed it out, but I kind of want to give context. I feel like we don't actually define what an external dependency actually is, and having done release notes and having had to go through the list, it's kind of a wide spread. So etcd is obviously a dependency for us, but I also had to track down what Ruby library an Elasticsearch client is using, because it happens to have already been in the external dependencies for release notes, and it's in cluster add-ons.
H: It should only be the things required to bootstrap the cluster itself; it should be nothing in the add-ons, nothing, none of the examples' dependencies, don't care. It really should be, at this point: etcd, the CRI tools and, I guess, whatever version of the CRI implementation that we actually went through and tested, so some versions of containerd or cri-o. Yeah, I feel...
H: ...were tested together, and whether their configuration was as well, and all the dependencies, you know, that go into the cluster. But yes, I think, I hope, there are now enough people working on this at enough companies that I think we're gonna see some action on it, at least here, which would be great. I agree.
C
I
feel
like
it
also,
it's
not
begs
the
question.
That's
actually
a
very
specific
fallacy.
What
am
I
trying
to
say
it
prompts
the
question
of
what
are
we
trying
to
release
here?
Is
it
a
core
kernel?
Is
it
a
distribution?
What's
the
set
of
things
that
we
as
a
project
of
taking
provenance
and
liability
for
versus
just
really
recommending
that
other
distributions
include
because
it'll
go
together.
Super
well
with
us,
yeah
bundles
are
one
way
of
expressing
that
other
distributions
have
other
ones
most
enough.
Yeah.
C: So let me stop you right there. Well, we need people staffed to work. I'm super happy for the working group to provide the infrastructure for this stuff to run; I need the people to show up to write the scripts, own the scripts, maintain the scripts, yadda yadda yadda. We are absolutely working on making sure there's a place for all this to live, but we need people to bring it to life in order for it to happen, and there are substantial KEPs out there written about this.
C: I was gonna... this is why I created the Release Engineering subproject, or tried to push you all to create it. I feel like those are the people who are in charge of building the tools and the infrastructure for accomplishing the release engineering. And wg-k8s-infra, because it is a working group, can't actually own code; it is the working group's responsibility to find SIGs and subprojects who want to own the code.