From YouTube: SIG Release Meeting for 20221101
A
Hey everyone, and welcome to our weekly SIG Release meeting. This meeting adheres to the Kubernetes / CNCF code of conduct, which basically boils down to: be excellent to each other. I already put the agenda in the chat, so please add yourself to the list of attendees, and notes will be taken today. So thank you for that.
B
Yeah, I'm having some trouble with my camera right now, but yeah — it's my first time here as well, and I just wanted to say hi. My name is Drew. I went to KubeCon last week and was getting to know the SIGs a little bit better. This one caught my interest, so I wanted to tune in.
C
Okay — well, KubeCon happened, so there's not a lot going on, except for two things, and the second one is maybe more suited to the release managers, but okay. The first one is that we fixed a bug that was blocking the release process over the weekend.
C
Apparently there was a change in one of the licensing libraries that we pull from Google, which broke the release process — and, well, that should be working now. I saw Carlos run a test, and apparently the release process finished correctly. The other thing that is at least on my radar is, of course, the OBS effort going on, which Marco was going to talk about today, I think — or about another project if that's not ready; that's what I heard. And finally there is the issue of the upcoming package releases, which is perhaps better suited to discuss a little bit further.

A
Yeah — this kind of refers to the next point, but we also have the kpromo changes in place now. For example, we now only promote the files which have been diffed — which have been touched by merging a PR for the promotion. This was merged earlier today and we had no run in between, so I expect that the alpha.2 — or by now alpha.3, planned for today — will probably be the first candidate to check this out and test it.
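The "promote only what the PR touched" idea can be sketched in plain shell: compare the set of artifacts already promoted with the desired set from the merged PR, and act only on the difference. The file names and digests below are made up for illustration; this is not the actual kpromo implementation.

```shell
# Hypothetical manifests: what is already in the registry vs. what the merged
# PR declares. Only the difference needs to be promoted.
printf '%s\n' img-a@sha256:111 img-b@sha256:222 | sort > promoted.txt
printf '%s\n' img-a@sha256:111 img-b@sha256:333 img-c@sha256:444 | sort > desired.txt

# comm -13 prints lines unique to the second file: the entries that are new
# or changed and therefore actually need promotion.
comm -13 promoted.txt desired.txt > to-promote.txt
cat to-promote.txt
```

With hundreds of images, skipping the unchanged entries is what makes the promotion run significantly faster.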
A
I mean, we have around 630 images right now, times the different locations where we want to sign them, and with the new kpromo we will then only do that for 30 images or something like that. So it should be significantly faster.
F
I think maybe this is now for the release team, but either way — yeah, 1.26 alpha.3, I'm cutting that today. The issue has been opened, there's a thread in Slack, and we're all ready to go. I'm going to be kicking that off around 9:30 Pacific today, and updates will be in the thread.
G
Sure, I can give an update. So far, so good — everything is running well; we don't have any major problems or anything that I'm aware of.
G
We sent out the mid-cycle release team status email last week — I think on the Saturday before KubeCon — and we have the first retro meeting scheduled for tomorrow, right after the weekly release team meeting. So if there's anything you would like to add to the notes, that would be very nice. I think we have a few topics — not too many, I have to check again. Right — and also, code freeze is next week.
H
I have one additional topic. I don't know if this is the right place or if this should be in a release team meeting, but SIG Instrumentation has a pull request open to add a generated document covering all Prometheus metrics in core Kubernetes.
H
I'm going to put the pull request regarding this in the chat, and I'll bring it up in the retro as an additional step for the release docs team, if they are willing to take it on. Right now SIG Instrumentation is generating this, and the pull request has not merged into k/website yet, but it would be an additional step proposed for the release docs team.
G
Okay, so this is not a KEP or anything, right? Okay, I have it noted.

A
Otherwise, we will now jump over to the open discussion topics. The first one: I had a look at the rapture script split-up today, and I created a placeholder issue which now contains the overall plan for moving it into krel. I think it should be doable in a very short amount of time.
A
So there is kind of a mismatch between how we understand building and publishing artifacts, because the script itself expects that the release binaries are already at their target destinations. So we cannot just run it in krel stage. But I don't see it as a big deal, honestly, because we will swap out this implementation in the near future in any case. And it also produces the packages on the local file system, so we can then just push them to a GCS bucket — our general GCS bucket, for example.
I
Do you think that we would do this for this release? I mean, we're not building packages for the alphas right now, so it's not a big deal — but do you think that we should change the process for the upcoming parts of 1.26 that will have packages generated, or do we want to hold this until the 1.27 release?
A
It kind of depends. I think we can do it — I think we still have enough time to play around with it this release and then propose the changes by the end of the release.

Okay — but my next question, for example, would be: do we have anyone on the call who is particularly interested in working on this? Because I think it should be fine if just one person works on it.
A
Yeah, maybe one open question would be: where should we put the packages so that the Google build admins can just publish them? But this probably doesn't really matter, right — it's just a download, so there are many locations we could specify.
E
Yeah, I think we probably just put them in an unadvertised GCS bucket, so that users don't start relying on it, and we just point the Google build admins at it — we can write a little script to download them and feed them into the signing step and the publishing step.
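The "little script" described here could be sketched as below. The bucket name, version, and the signing/publishing commands are all placeholders (assumptions, not real endpoints or tools), and every step is echoed rather than executed, so the flow can be reviewed before anything is wired up for real.

```shell
# Dry-run sketch: fetch staged packages from an unadvertised GCS bucket and
# hand them to the signing and publishing steps. Swap the echo for real
# execution once the bucket and tools are agreed on.
run() { echo "+ $*"; }

BUCKET=gs://k8s-staged-packages-example   # hypothetical, unadvertised bucket
WORKDIR=$(mktemp -d)

run gsutil -m cp -r "$BUCKET/v1.26.0/" "$WORKDIR/"
run sign-packages "$WORKDIR/v1.26.0"      # placeholder for the signing step
run publish-packages "$WORKDIR/v1.26.0"   # placeholder for the publish step
```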
E
Yeah, I would say really any bucket that isn't the known public release bucket, which we're going to want to be moving off anyhow. I mean, it sounds kind of Rube Goldberg and not great, but that's about how it is today, because the packages themselves just fetch the binaries, and I don't think they do any validation of them or anything. The obvious next step is that the tool needs to be replaced or iterated on.
E
One little thing: when we were working on Bazel, one of the things that we actually did was have the deb and RPM builds in the main repo, consuming the binaries the same way we package the images. I think that might be worth considering, because it also gives us an opportunity to fix a long-standing issue: because the specs aren't versioned per release, we can't make any changes to the systemd unit files.
A
Yeah, this kind of falls into the same category as the next topic, where Marco would like to give us an OBS demo. But we could also discuss, for example: do we want to build the packages from the sources, the way debs and RPMs are usually built? Or — we probably still won't have something like a short path, right? We have a binary somewhere and just move it around, so we don't want to build it over and over again for every architecture.
E
Yeah, I think that we should do what we do with the container images: we take the binary we just built and put it into the package, but locally, without these extra hops and bouncing through GCS. And it's also kind of nice, because you can just check out kubernetes and build any of the release artifacts there. You know, base images or some such aren't there, but everything else in the release — I can clone kubernetes, run a certain command, and get those things.
J
Okay, so this is a demo and update about the OBS work that we have tried to do over the past few weeks. For those folks who don't know, OBS is an open source platform for building packages for operating systems like Ubuntu, CentOS, and all the others that are deb- and RPM-based, and so far we have had solid success with this platform.
J
This is something that we want to use to replace the scripts that we discussed before — including building, publishing artifacts, and signing; everything is done by OBS. And OBS provides — we can treat it as an API, so that we don't have to take care of keys ourselves, like GPG keys and stuff like that. Everything is done by the platform, so it helps a lot that we don't have to think about that much.
J
The openSUSE folks want to support us, so they will sponsor us to use the openSUSE-hosted OBS instance that they have, and we have been using it for testing, for proof-of-concept purposes, and I think it was pretty valuable. The builds are quite quick — you get packages in something like five to ten minutes; that is what I experienced with the major platforms, and we will see about every other platform. So, how it looks: this is the main page.
J
I can also share links. It is a service with a home page — and this is our project on OBS; it's called isv:kubernetes, and basically it has all the packages that we have. We can easily create a new package; for now I added a bunch of them, and the ones that we actually publish right now are cri-tools, kubeadm, kubectl, and CNI. For all of them I got the Ubuntu builds working — so debs — and for kubelet I also got the RPMs working. That's how it stands.
J
So when it comes to publishing the packages: as you can see from the root of the project, when you choose, for example, kubelet, you can see the source files, and it's expected that we provide a changelog. So this is kubelet.changes — this is usually the changelog of the package itself, not of Kubernetes; for example, if we change something in the spec or something like that. We have a description file for Debian,
J
we have the spec file for the RPM-based distros, and then we have the tarball that contains the binaries and also the spec files for Debian. So for RPM we have this single spec file, and that's all; but for Debian there are also a bunch of other files. The description file is put in the root, like a source file; every other file is put in the tarball. So basically we can go through that stuff — this is, for example, the kubelet one.
J
As I said, you have a very basic changelog. The description file is basically what we had before: you put what you depend on, you put the files, the tarball that we provide, the package name, and other stuff. It is very simple, nothing too complicated. And the spec file — for example for kubelet — is used for the RPM side, and, as you can see, it is coming from kubepkg; I used the tool that we already have, so it is based on that.
J
This thing that we have here — let me just find it, give me a moment, please... yeah, here it is. The difference is that before we had one huge spec file that creates all the packages — one spec file for all packages, which we discussed a meeting or so ago. We don't really want that, because it is hard to maintain and it isn't easy to use.
J
So the idea for how we want to build packages is that we created split spec files — basically one spec file per package, for example one for kubeadm only — and it seems to work very well. What is left to show is actually installing the packages, but I have only done that in a container so far, nothing fancy. Okay, so how it actually works: this is the tarball — I have created a subdirectory for each supported architecture.
J
It can contain things like the kubelet binary and the kubelet environment file — this is something that Ben mentioned earlier; we have those args and stuff that is usually overridden. We provide all of that, plus the binary, for each supported architecture, and then we have this debian directory, which is basically the spec files for building Debian packages — the ones that we have with kubepkg and in the release repo as well. So it is nothing new; it is just those files taken, put in an archive, and published.
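The source-tarball layout described above can be sketched concretely. The exact file names are illustrative (a per-arch directory with the binary and its environment file, plus a debian/ directory with the packaging files), not the project's final layout:

```shell
# Build an illustrative skeleton of the per-arch source layout.
mkdir -p pkg/amd64 pkg/arm64 pkg/debian
: > pkg/amd64/kubelet;  : > pkg/amd64/kubelet.env    # binary + env file, amd64
: > pkg/arm64/kubelet;  : > pkg/arm64/kubelet.env    # binary + env file, arm64
: > pkg/debian/control; : > pkg/debian/rules         # Debian packaging files

# Show the resulting tree, one file per line.
find pkg -type f | sort
```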
J
And finally, when you push that — for example, I can try to make some change; let me see if I can do that. So, for example, how it works: I can change something in the spec... I don't know, let me see.
J
There is a command-line tool called osc, and osc is used to interact with Open Build Service — with the instance that we are working on — and it provides a very simple interface for modifying those source files. So, for example, you can see the status; I can also do a diff from it, and you will see the change — see, this says 'demo'.
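The osc workflow shown in the demo looks roughly like the steps below. The subcommands (checkout, status, diff, commit) are real osc commands, but here each one is only echoed via a wrapper, so the sequence can be reviewed without touching the real isv:kubernetes project; the commit message is made up.

```shell
# Dry-run of the osc editing loop; drop the echo wrapper to run it for real.
run() { echo "+ $*"; }

run osc checkout isv:kubernetes kubelet    # fetch the package working copy
run osc status                             # show local modifications
run osc diff                               # inspect the spec/changes edits
run osc commit -m "Update kubelet spec"    # push; OBS rebuilds automatically
```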
J
And, for example, it is building now — I pushed a change earlier, and it usually takes... let me see, can I find the build history somewhere... yeah, not sure about that right now, but a build takes maybe two or three minutes; it is quite quick. So, what else could I show — for example, this is going to be useful to the Google folks, at least in the beginning, until we complete the switch: you have access to all the files. It is not here, and not here —
J
it is here. For example, you can go to our project — I can share this link as well — and if you open, say, CentOS, and open into it, you will see, for example, the kubelet package, and you can download it. So you could take that package, strip off the signature, and then use it to publish on the Google infra. That is an option as well. And the same goes —
J
for Ubuntu: if you go to Ubuntu, you will see all those packages there as well. And yeah, I probably won't start downloading one of them right now, but you can believe me that it works. Maybe if I do some public test I will share it with you all and show how it works — and, as you can see, it works well.
J
I don't really have many questions about this; the biggest concern I have is that there is going to be a problem with armv7 and with s390x or PowerPC. I think it is because those platforms are, yeah, kind of strange, kind of hard to test, and it is a problem that builds might take quite a while for those platforms — and if we wait long enough, we will see
J
that those builds are going to fail, as you can see, because openSUSE has very strong requirements when it comes to spec files: you must pass the majority of the rpmlint checks. For example, we have an issue with an invalid license — we have some errors, and until we solve that, those builds will fail.
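For context, the kind of fix an rpmlint "invalid license" error usually needs is a License: tag that matches an identifier rpmlint accepts (SPDX-style). The fragment below is only an illustration — Apache-2.0 is the Kubernetes license, but whether it is the right tag for each of these packages would need to be confirmed against rpmlint's accepted list:

```shell
# Write an illustrative spec fragment with a valid SPDX-style License: tag.
cat > kubelet.spec.fragment <<'EOF'
Name:    kubelet
License: Apache-2.0
Summary: Container cluster node agent
EOF
grep '^License:' kubelet.spec.fragment
```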
J
I think the culprit is the kubepkg spec file, so this is something we have to be careful with, because we want to make sure that if we change the spec file, we are not going to break the existing kubepkg-based tooling. So this is something we have to look into, but overall I think it is only a problem for those two platforms, and I think it is solvable. Just to mention again: this build might take quite a while, depending on the day.
J
Okay, so I have spoken for some time — I can show how the platform looks, and what else might be useful. One more thing: they are providing some, let's say, primitive type of version control. It is not like git, but it is something where you can look at the history, you can open the sources, you can do stuff like that.
K
So if somebody were to create the apt repository and pull from this, they would create something pointing at isv:kubernetes — that would be the official artifact registry, I guess, or repository?
J
I can show that as well. For example, if you want to take kubelet from the isv:kubernetes project, you fetch, for example, the deb file, put it in place, and you can install it afterwards. So this is the way it works. As for migration: if you have existing packages, you just swap the old repo file for the new file, and it just works.
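What "pointing at isv:kubernetes" could look like for an apt user is sketched below. The repository URL follows the usual OBS download layout but is an assumption here, not a published endpoint, and in practice the repository's GPG key would also need to be trusted:

```shell
# Write an illustrative apt source entry for an OBS-hosted repo (URL is
# hypothetical). Migration would then just mean swapping the old list file
# for this one under /etc/apt/sources.list.d/.
cat > kubernetes-obs.list <<'EOF'
deb https://download.opensuse.org/repositories/isv:/kubernetes/example/ /
EOF
cat kubernetes-obs.list
```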
E
So how does this work when you have a new release to publish? Do you have to create this new tarball, and do you have a system for that? Or — I really like that they're hosting this for us; I think that's a little bit of an issue for us right now. The obvious next problem is: where do we actually host these things and manage keys and whatnot, right — more download traffic — but this looks pretty manual.
J
Okay, that's a pretty good question. The way I anticipated this working is that some tooling — for example krel release — creates that tarball and then we push it to OBS. The question is: do we need to save it — for example, do we need to put it in a GCS bucket? Technically speaking, from the OBS perspective, we can, for example, push it directly.
J
We can also put it in some bucket and then use, for example, a download service that pulls it from the GCS bucket — but then it doubles the space requirements, because we have to store the tarball somewhere for each release.
J
I think this can be automated in krel, because the tarball itself, as I can show here, is pretty simple — not complicated; it is just binaries. So you can easily create a script to pull each binary, put it in the appropriate directory, and then create a tarball and push it to OBS or to a GCS bucket — whatever we decide.
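The automation described here — pull each binary, place it per-arch, tar it up, push — can be sketched end to end. The download URL is deliberately fake and the fetch/push commands are echoed placeholders; only the local staging and tarball creation actually run:

```shell
# Dry-run sketch of the krel-side automation. Fetch and push are echoed.
run() { echo "+ $*"; }

for arch in amd64 arm64; do
  mkdir -p "stage/$arch"
  run curl -fsSL -o "stage/$arch/kubelet" \
      "https://example.invalid/release/v1.26.0/$arch/kubelet"  # placeholder URL
  : > "stage/$arch/kubelet"   # stand-in for the downloaded binary
done

# Create the source tarball and (dry-run) hand it to OBS or a GCS bucket.
tar -czf kubelet-sources.tar.gz stage/
run osc add kubelet-sources.tar.gz   # or: run gsutil cp ... gs://<bucket>/
```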
C
Sorry — yeah, it's related. Those operations that you did with the CLI tool: how does the authentication work if you want to interact with it?
J
Oh yeah, that's a good point — I even need to double-check that. From what I have seen on my profile, you have some sort of tokens: you can create a token, and we could probably try to use that from tooling like krel and authenticate with the token. I think that can be made secure, but I only did it once and didn't think much about it.
J
So, for the beginning — and thanks for that — if anybody has a question about that, please ask. I think the plan is that we will use the osc tool itself in the background — the topic we discussed before. The idea is that we don't have to go into writing a Go API client or something like that, because that would take time; we can just invoke osc directly from krel and use it like that.
J
At least for the beginning — and then we'll see later if we need to create some library or something like that, because, as far as I have seen, they don't have an API library for Go or anything like that that you could use.
J
Yeah, not necessarily, because of rpmlint — you can recheck it in the job. As you can see, there is this build, but it's going to fail again, because those rpmlint errors persist. This can be solved — it is not unsolvable — but the question is whether this is a blocker or not, how we want to proceed, and so on. Do we even want to go with those packages on the new infra?
J
It is because of the operating system — as you can see, you can build for each operating system, and right now the only platform enforcing this is openSUSE: CentOS, Debian, and Ubuntu are not running rpmlint, but openSUSE enforces that to publish packages you must pass the rpmlint checks. That's not the case for the other operating systems, but it is the case for openSUSE.
J
I would add that for openSUSE it will still fail even then, because they require that there are no rpmlint errors at all. We can see whether there is a way to work around that, but from what I have seen there is not.
C
Yeah — maybe you haven't looked into this, but do we know if there's a limitation on the free hosting that they provide? Sometimes things are free only up to a certain point.
J
I mean, they're sponsoring us, so I don't anticipate any limitations for now. Even if we run into such limitations, we can host the OBS platform ourselves, and everything is relatively easily migrated — the problem would be the GPG key. But from what I have seen, I don't anticipate problems. Storage is going to be a question, for example, because those are relatively large tarballs — but that holds even if we push the Kubernetes sources to build the packages in OBS.

A
I have a concern, or a question. One concern is that you configure the repositories for the kubelet per distribution — for example CentOS 7 or Debian testing — and people will complain about pulling Debian packages into Ubuntu distributions, for example. We could probably tweak the configuration and rename that to something like 'generic RPM builder' or 'generic Debian builder', so that it doesn't really point at one distribution.
A
Yeah, that's the thing — we can probably just rename it, but we would still have a generic builder, and, for example, if a dependency of a package has changed — like, I think we need curl or something like this, or... I don't think we need conntrack —
E
I'll also point out one thing that bugs me a little bit: I think, aside from Windows, for the rest of the Kubernetes release today it is technically possible to invoke the build locally and produce the build outputs yourself, and this moves the specifics of the build out a bit. On the other hand, I personally feel these are slightly ancillary packages. I think we need reference systemd/spec files, but beyond that — specifically installing for these distros —
E
this isn't the only way to install Kubernetes, and beggars can't be choosers — we kind of need somewhere to host these things.
E
Unless maybe we can figure out some way to do it on Amazon — but we're pretty tight on the budget going into next year, and we don't quite have all the specifics of the Amazon details yet.
E
Yeah — one of the upshots to OBS that is fairly substantial is that they're providing the hosting, for dealing with the GPG keys and for the actual downloads for end users, which has been something of a pain point for the infra side.
E
And we don't really know how expensive the current packages are, because they're not — I mean, it's not a product.
J
Yeah, this is what I'm a little bit worried about as well. The feedback we have gotten from the OBS folks is that they can do it and they don't expect any problems, but I am personally a little bit worried about the load that this is going to generate, because those packages are relatively big — and since Kubernetes is a big project, it's going to put quite a lot of load on the OBS infrastructure, since many folks are using those packages. But I think the only way we can see how this is going to work is to give it a try; I don't really think we have any way to estimate it, sorry.
E
I mean, as opposed to us standing up our own hosting, where we also have this problem — this is sort of a way for SUSE to supply the project, which is something that we're hoping to get more vendors involved in.
D
Yeah, I just want to expand on that: that was one of the ideal end goals with this. However, because we use these packages a lot within the various different cloud providers, for testing and everything —
D
you know, whether we provide a redirect or a proxy service, we probably want to have a discussion around these packages, so that we could potentially cache them within the various different cloud providers and speed up delivery time and availability over those network connections. Or we could just have a simple redirector service, so that if, in the future, we do need to migrate to a self-hosted OBS or some other type of infrastructure for hosting the debs and RPMs, we don't have to go through the URL shuffle
D
another time. The other good thing about being able to migrate to our own self-hosted OBS infrastructure is that the SUSE folks can actually give us the GPG keys that are being used in the public OBS instance, so that we can migrate to self-hosted infrastructure wherever we want to host it and import those GPG keys. So if we have that redirect or proxy in place, there would be absolutely no disruption to end users, other than the cutover.
J
Okay, we're running out of time, so I have to ask one quick question: how do we want to proceed? Now that we have seen that this has the potential to work — that it mostly is going to work — how do we want to go next? The soft plan that I had is that we could try to get all the packages building.
D
So, unless there are objections — and it seems like most folks are good with proceeding down this route — I'd say we could start putting together a roadmap detailing what the current problems we have are and what the gaps are to actually implement the solution, and then basically update the KEP with details: you know, an adoption strategy for how we get to alpha, beta, and GA with OBS.
K
And in the short term — excuse me — I think it'd be awesome to define what the MVP is, the minimum go/no-go for OBS: can we build all the packages, or what is the lowest bar to say either 'total green light from SIG Release' or 'let's put the brakes on this'? Just get to that fail-fast point and move forward, because it does seem like we're all on board, but I think we need that bar of saying: yes, we're all in, based on this criteria that we've defined.
D
Yeah, that's why I suggested the KEP — I think that would be a good place where we can, asynchronously as a group, come to that conclusion of, you know, what that point is. I think we could discuss it in this meeting, but we'd be lacking a whole lot of thoughts and opinions from the larger group if we did.
E
Not sure how to put this, but basically I don't have super strong opinions — at this point I have some thoughts about how to be careful whichever way we go. But the biggest thing that I just want to underline is: if I can have something to point to in the near future, to show the Googlers that are still staffing this that there is a path towards the project taking over, I think it's going to be easier to convince their management to, you know, make sure that we have a very graceful transition. They're pretty close to just not doing this anymore — there's just no team alignment, and they're getting folded into a larger team that has existing, significant on-call duties.
D
I think that's a great point, Ben, and I think we can point them at the combination of the work that we're doing to break out the building and the signing of the packages as one of those steps — and I think we can absolutely point them to the POC that we're doing with OBS right now, like Marco's demo, as another. And I think the KEP is the next step to show them that we can actually put a timeline on this and time-box it, instead of it just being up in the air, 'we're working on it' type of thing.
E
Yeah, I appreciate it. I'll point them to the breakout work and so on, I'll bring up the KEP as soon as we have it, and I'm going to follow up in the thread with their managers and so on about this meeting today — because from my point of view we have pretty significant progress here, and thank you all for working on that.
A
Exactly. So I will move the open topics either into next week's meeting, or we can just discuss them asynchronously in Slack, as you prefer. Thank you all for attending — we are already out of time. Thanks for the demo, Marco; enjoy the rest of the day, and see you all soon. Bye.