Description
Discussing the KEP for debs/rpms packaging. Notes are in https://docs.google.com/document/d/1sSezV-vOSsmrL-Pm79ZJBPCL41KOTp4DJ8DwHgFaGMg/edit
A
Let me start by giving an overview of where I'm coming from. I started looking into how Kubernetes artifacts are built and published and found that there are multiple ways to build the deb and RPM packages specifically, and I wanted to understand that. Another part of the story is how our RPM and deb packages are actually published, and that's the area where I have near to zero insight and wanted to find out about it.
A
For the first part, how to create the packages, I created a KEP where I tried to read up on all the different issues and PRs and Slack messages and whatnot, tried to summarize that in the KEP, and laid out a goal for how we might want to build the tooling in the future, where we want to go. This KEP is currently under discussion and I try to update it as comments come in.
A
So that looks good to me, but the other part, as I said, where I have no idea how it works, is the publishing side, and I would ask if somebody can give me some insights there. The idea is that maybe I want to create a KEP for the publishing part afterwards. I just heard there is already some PR going on from Brendan.
E
I want to step back from going through logistics here and understand: I've been out for a week, and this is a P0 that SIG Cluster Lifecycle committed to this cycle, to at least do the artifact generation, and then there's the publishing. I don't really care who does the work; if somebody else wants to step up and do the work, that's totally cool with me.
E
From a technical ownership standpoint of who's doing what: I don't care about the technicalities as much as I do about making sure the work is federated cleanly across the SIGs, because right now folks are lined up on, say, Cluster Lifecycle to help, and they're already allocated to the work. They've talked to their managers, they've done their due diligence, so I just want to know who's doing what.
B
From the time that I actually did the RPMs: yes, there's a publishing part that right now a Googler needs to do, but most of the time-consuming part, and the part that failed and needed somebody doing fixes, is actually the building part, not the publishing. Even once the RPMs, the artifacts, are ready, if you are a Googler the push itself is not the hard part.
B
So first, it would be very helpful if we actually added building the RPMs and debs to our release tool, so that it builds them and puts them in some bucket; then, yes, the next part is to figure out where to actually publish them. What I'm saying is that this would already help a lot for the Googler who's actually pushing them, because the pushing part never failed for me; the build failed all the time for me.
E
What's listed in the main requirements of the original issue that Cluster Lifecycle committed to as P0 is the idea that every release builds the debs and RPMs just like every other artifact. We already do this today for CI automation, but we want those to be the official artifacts and make them cleaner. The requirements and the details are listed in that issue, and I know Jeff commented on that issue as well.
F
Yes, I mean, if that's what the group wants to work on, that's also fine. But I guess what I'm saying is: we should not scope the problem the way it looks today, where you're assuming that a Googler has to push the actual artifact to some location. It's not done until that is no longer true, I think.
E
The problem is multifaceted. We've created an incredible amount of technical debt, and there's some glue that exists inside of test-infra to do parts of this. What we want to do is disentangle that glue and make this like every other artifact that we build. The debs and RPMs should be no different than, say, the API server.
E
The consumption and publishing of that would then be a separate step. The question we have is that there are other requirements right now. One of the things we would like to have is the idea of a nightly build. A traditional release apparatus has stable, testing, and nightly channels, and the testing channel would allow us to actually push out artifacts for, you know, dot-whatever releases, beta and alpha releases, because that's part of the problem; and the build apparatus is totally orthogonal, as has been mentioned. We could fix that problem independently, but for the release publishing, ignoring what we have today and asking what we want: we want a stable, nightly, and testing set of repositories. That piece of publishing doesn't exist in its current form today; we just have a stable repo, and it gets overwritten for everything.
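The stable/testing/nightly split implies a rule for deciding which channel a given build lands in. A minimal sketch of such a rule, assuming semver-style Kubernetes version tags; the exact mapping is an illustrative assumption, not the project's actual policy:

```python
import re

def channel_for(version: str) -> str:
    """Map a version tag to a package repo channel (illustrative rule)."""
    v = version.lstrip("v")
    # Final releases like 1.17.0 would go to the stable channel.
    if re.fullmatch(r"\d+\.\d+\.\d+", v):
        return "stable"
    # Tagged pre-releases like 1.17.0-beta.1 would go to testing.
    if re.fullmatch(r"\d+\.\d+\.\d+-(alpha|beta|rc)\.\d+", v):
        return "testing"
    # Everything else (CI / date-stamped builds) is treated as a nightly.
    return "nightly"
```

Under this rule, `channel_for("v1.17.0-beta.1")` yields `"testing"`, while a CI build string with extra build metadata falls through to `"nightly"`.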
F
I agree that they both have value, but I'm just saying that until the publishing part is fixed, all the changes we make to the building process can't really land: I don't know how one would consume them outside of a Googler continuing to push. And I feel like one of the reasons why we are all here is that we do not want to be in a position where a Googler has to do the push.
E
I just wanted to understand who's going to work on what pieces, because we've clearly identified the two buckets. There's the packaging bit, which we want to land in the main k/k repo underneath the build directory; that has an issue associated with it about unifying the packages into a unified format so that you can publish both, and it's well defined. Then there's the publishing bit, which is not well defined; there are some requirements.
A
I can own that. That's actually why I started looking into the whole process of generating the debs and the RPMs: it hit me as a branch manager, and while I was looking into it, that's how I found your issue, Tim. I was not aware that you're already that deep into things with people from SUSE and whatnot, so I apologize for interfering there with my KEP. But what I hear now is: well, the KEP is kind of fine, but not high priority.
A
Also, you are working on that part anyway, but the more interesting part is the thing that I have not looked into yet, which is the publishing part: moving that onto a CNCF-owned thing and figuring all that out, having nightlies published in a different repo than stable, and all that fuss. Is this kind of correct? Did I paraphrase that correctly?
A
Okay. What I can do is own the publishing part: try to get an overview of how it's currently working and where we want to go, and probably summarize that in a KEP. I'm not sure what we want to do with the KEP I already have; do we want to continue there? That's mostly a question for you, Tim, I guess, as you already have people working on that anyway.
E
We're not working on the publishing bit; some people looked at it and they looped in, say, Cluster Lifecycle folks. It's up to you: you can just repurpose the KEP and have it cover this specific area that you want to focus on. I think maybe cross-link the information; we should probably close the 25 or so other issues that exist on this topic and link the KEP and the main issue (114) together, so that they link back and forth to one another.
C
I think it is worth keeping the KEP that we already have, the packaging KEP, and making sure we integrate all the feedback and the discussion from the existing issues. This definitely feels like it's KEP-worthy, and you already have a KEP started, so I think we should continue to work in the existing KEP, get everyone happy with it, and get Cluster Lifecycle on it to make sure everyone's happy with what it says.
B
These are separate things, because we are thinking about Cluster Lifecycle people and test people and release people. For the release folks, the publishing side is very important; for test, and from the point of view of Cluster Lifecycle people, the building matters more than having an official repository. I don't think they are actually related.
F
The KEP is supposed to talk about the value that's delivered to the community. As a random person in the community, I want to go to some well-known location and download debs for the Kubernetes binaries; that's value delivered to me. I think, as we have described, there are two large technical chunks of work to getting there, to a place where that URL is not owned and managed by Google. The first part is how we build those RPMs and debs.
F
The second part is how we publish them. I think that is all part of one value chain, and that value chain should be what the KEP covers: the KEP is supposed to span multiple releases and implement multiple discrete enhancements. So the first enhancement, if you want, because we have staffing for that today, is to work on how we build the debs and RPMs and make that reasonable.
F
That's great, and if at the same time, hopefully, but we'll see, you also have someone working on how the publishing works and on making those changes more widely visible, that's also wonderful. But I think there's no reason those two have to land in lockstep.
B
The building part actually has publishing with it, but not to an official RPM/deb repository; it publishes them the same way that we publish any other artifact. People can still download them manually, install them, test them, and do anything with them. But for the part where we need an actual official repository for the RPMs and debs, that's where we need to tackle the problem of a Googler having to do it; there we need another approach.
F
Having automated builds in kubernetes/kubernetes will make it easier to solve that other task. I'll also point out that I don't think the K8s Infra working group is actually quite ready to own another piece of the infrastructure yet. They have just stabilized the DNS work, they're still figuring out how they're going to hand out and manage, say, Google credentials and projects and things, and it's probably going to take a bit longer to get a registry up there.
F
I'm not saying anything about blocking the building work. I'm saying that if we complete all the building work and change nothing else, you still need someone at the keyboard who works at Google to do the publishing, and from a community standpoint that seems to be a big sticking point, because it introduces a delay between the publishing of the rest of the official artifacts and the debs and RPMs, which is the position we're trying to get out of.
G
Absolutely, we shouldn't have any humans in the loop. I specifically feel guilty, because in the past release managers reached out to me and others, and a lot of the time I'd not be around immediately to answer. So there's a big time gap between the moment the patch release manager says "build our debs and RPMs" and the time it's actually published. We should really remove that process, and it should all be done either directly by the patch release manager or, maybe in the future, all automatically, like a nightly release.
B
I completely agree: the ideal is to remove the need for somebody at Google to do this. However, if somebody told me "you only have time to do one of these," then definitely I don't have time to do the build, and I don't have time to figure out what to do with the repository; but if I had to be that person at Google, sitting at the keyboard and running the command is the easy part.
F
Agreed. I'm just always trying to think about what is the most valuable thing that can be delivered to a user of Kubernetes, and that's the value of the KEP; it should be stated in terms that a user would understand. So if you're saying that we're reducing the gap between publishing the debs and RPMs and the rest of the release process to zero, that's great as the goal, and there's a bunch of stuff we're going to do in service of that goal.
F
One is moving the release process for debs and RPMs back into the overall one; another is increasing the frequency of releases. That's all great. It's just that I think we should be clear about what value we're trying to deliver to people who use Kubernetes, and orient our work around that. There are other work streams too, and they're also totally fine, but the value should be clear and understandable to a user of the project.
B
Ideally we would have the beta, alpha, and final releases somewhere that all the tools work with, under CNCF control; that's the ideal, and I'm not saying it's not. I'm saying that if you do the build and put the packages somewhere, the way you've done with other release artifacts, that already gives some value to the users right now: they can manually download them and test them.
E
There's the question of what the separation means for publishing, because when we build things for test automation, we don't test on, you know, s390x; in the continuous CI we only build and test for x86-64. So we're not actually verifying those other artifacts; we're not even generating them as part of the build process. That's the separation between the automated CI that builds and tests artifacts versus publishing.
A
Okay, but I think that's fine for the first part. What we need to decide on is the handoff from building to publishing, and I think, yeah, we will figure that out in the KEP or somewhere else. What I can do is start working on the second part, on the publishing, and on defining the handover from the building block to the publishing block.
G
I also think that if we can figure out how to fix the building side of it first, then as we're doing this we can start talking with the K8s Infra working group about how we could host a deb and an RPM registry. Once we have an idea there, we'll basically need to copy the builds over, and we'll need to make sure that the build is secure enough. I think the actual implementation can be delayed, but we can think about how the tooling will help with that.
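The handoff described in this discussion, where the build drops artifacts into a staging location and a separate publish step promotes them into per-channel repositories, can be sketched roughly as below. The directory layout, file names, and the `promote` helper are hypothetical stand-ins for whatever bucket and repository the real tooling ends up using:

```python
import shutil
import tempfile
from pathlib import Path

def promote(staging: Path, repo_root: Path, channel: str):
    """Copy staged .deb/.rpm artifacts into <repo_root>/<channel>/.

    Returns the names of the packages that were published, in sorted order.
    """
    dest = repo_root / channel
    dest.mkdir(parents=True, exist_ok=True)
    published = []
    for pkg in sorted(staging.iterdir()):
        if pkg.suffix in (".deb", ".rpm"):
            shutil.copy2(pkg, dest / pkg.name)
            published.append(pkg.name)
    return published

# Demo with throwaway directories standing in for the staging bucket
# and the published repository.
tmp = Path(tempfile.mkdtemp())
staging = tmp / "staging"
staging.mkdir()
(staging / "kubeadm_1.17.0_amd64.deb").write_text("placeholder")
(staging / "kubelet-1.17.0.x86_64.rpm").write_text("placeholder")
published = promote(staging, tmp / "repo", "stable")
```

The point of the split is that the promote step is mechanical, so it could run unattended (a nightly job, or a bot triggered by the release process) rather than requiring a Googler at the keyboard.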