From YouTube: Kubernetes SIG Bi-Weekly Meeting for 20221018
A: Hey everyone, and welcome to our weekly SIG Release meeting. This is the EU time slot, or Pacific, yeah, the Pacific time slot, probably. This meeting adheres to the CNCF Code of Conduct, which basically boils down to: be excellent to each other. As always, I will link the agenda in the chat, and thanks to Ray for taking notes today.
B: Well, I guess, I think I've been out, so I may not be aware of everything going on, but I have two topics I would like to talk about. The first one should be happening pretty soon; maybe it's already done, I don't know, it's pretty early here. And the second one, which also segues a little bit into the next topic, is the state of the promotion process. As you may have seen from the previous releases, the image promotion is currently running very, very slowly.
B: This is because we are now promoting to a ton of Artifact Registry regions. Before, when we did the promotion process (which means that staged images move from the staging registries to the production registries after a community authorization to release), we used to copy them to three locations, and now it's, I think, over 20, if I'm not mistaken. So this made the promotion process very slow, and it will require a significant redesign, a rethinking of the whole process.
B: Everybody, I think, is right now doing their best to come up with new solutions, and there are some ongoing questions in Slack, in the issues, and in other places. So, first of all, I would like to thank the release managers who have taken over those releases, which have taken, well, more than a day to complete. And the second thing is just to let everybody know that we are trying to come up with some answers, to, well...
B: Maybe remediate the situation a little bit for the moment, but then also trying to rethink what promotion looks like for the following months, and maybe years.
B: Not really; currently there's a PR going on in kubernetes-sigs/promo-tools trying to speed up the process in the authorization, and mostly Ben is working on looking at how to adjust the promoter to run more efficiently, to make better use of the hardware infrastructure that supports it. But overall, no, we don't have a working plan yet; we can create one, though.
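As a rough sketch of one direction such a plan could take: the per-region copies are independent of each other, so they can run concurrently. The dry-run below only echoes what it would do; the region names, the production registry layout, and the idea of a generic copy tool (such as `crane`) are illustrative assumptions, not the actual promoter design.

```shell
# Dry-run sketch: fan one promoted image out to several regional
# registries in parallel rather than sequentially.

copy_image() {
  # A real copy would be something like: crane cp "$1" "$2"
  echo "copy $1 -> $2"
}

fan_out() {
  src="$1"; shift
  for region in "$@"; do
    # Each regional copy is independent, so run them concurrently.
    copy_image "$src" "${region}-docker.pkg.dev/k8s-artifacts-prod/images/${src##*/}" &
  done
  wait   # block until every background copy has finished
}
```

With something like `fan_out registry.k8s.io/pause:3.8 us europe asia`, the wall-clock time of a promotion would be bounded by the slowest region instead of the sum of all regions.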
A: Yeah, for example, it also affects the pre-submits, right? So it would be great if we could do something like: check what has changed, and then check only the things that should be checked in the pre-submit, and not every image in the repository. I would also assume that we can somehow identify possible fields for parallelism or something like this.
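That suggestion can be sketched as a tiny filter: take the list of files the PR changed and keep only the promoter manifests, so the pre-submit validates just those. The path pattern is an assumption about the repo layout, for illustration only.

```shell
# Hypothetical pre-submit filter: given the list of files a PR changed
# (one per line on stdin), keep only the promoter image manifests.
manifests_to_check() {
  grep -E '/images\.yaml$' || true   # '|| true': "no matches" should not fail the job
}

# In the real pre-submit the input would come from the PR diff, e.g.:
#   git diff --name-only "${PULL_BASE_SHA}...HEAD" | manifests_to_check
```

Everything the filter passes through would then be validated individually, instead of re-checking every image in the repository on every PR.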
B: Yeah, well, thank you, Muhammad, for sharing all the issues in the chat. So one idea, for example, that Ben came up with is maybe not looking at what the registries look like, but instead relying on the state we know from git.
B: Well, in fact, I will be presenting at the community, at the Contributor Summit, with Arnold and Ben, just a quick, mostly informal kind of talk about this, about the changes to the registry. The idea of this is twofold: one, let the community know why the changes happened and what they look like; and two, maybe it can open the forum for a nice discussion around new ideas on this topic.
B: And, well, I guess this is it for me. I mean, Muhammad and, I don't know who else here, have been working fearlessly on all of this process, so if they have anything to add, you're more than welcome.
C: ...to add to that, right. So yeah, it was a long project, I don't know, four months in now, and we finally got there. Nobody expected the promotion to drag on that slowly, but we kind of do need to rethink some of our assumptions.
C: One quick and easy fix right now is to get the pre-submits sped up by basically looking at the diff of HEAD and the branch: basically look at the resulting manifest YAML and just check that, instead of checking the entire thing. There were some other ideas going around, but I haven't seen anything in writing, or in an issue, that's ready to go just yet. And if it helps anybody: we only promote to 20 out of the 35 regions that are available.
C: So if we do turn those 15 on, I think that's going to drag on for another two, three hours.
B: And also, Muhammad, I don't know if you're more up to date on the status of the project, because there's also the plan to mirror some of those images to AWS. Is that still going, or how does it look?
C: We mirror from the GCS bucket that backs GCR. There's another long conversation that we need to have about that idea itself, because that might go away sometime in the future. But as of today, that's what we do: there's a bucket from Google Cloud that we copy to S3, and then we use some AWS services to copy to the other 10.
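As a hedged sketch of that flow: one pull out of the GCS bucket into a staging area, then a sync into an S3 bucket per target region. The bucket names and region below are placeholders, and the real pipeline may use different AWS services for the remaining regions; the function only prints the commands.

```shell
# Dry-run sketch of the GCS-to-S3 mirroring flow described above.
# Bucket names and the region are illustrative placeholders.
mirror() {
  src_bucket="$1"; dst_bucket="$2"; region="$3"
  echo "gsutil -m rsync -r gs://${src_bucket} ./staging"
  echo "aws s3 sync ./staging s3://${dst_bucket} --region ${region}"
}
```

Running `mirror k8s-artifacts-prod registry-mirror us-east-1` prints the two sync steps for a single region; repeating the second step per region is what the downstream AWS replication would take over.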
B: Yeah, I know it's going at a separate pace from the actual promotion, but yeah, I mean, that is the absolute big one, and, well, I hope that we can get some work going in the really near future to remediate some of this.
A: Oh, it does not look like it, so we'll skip this over to the next meeting, which is not a big deal at all, and we can jump over to the open discussion section. The first topic is a short one: I sent out a mail earlier today saying that we moved our SIG Release meeting from Monday to Tuesday next week.
A: The main reason for that is that we assume folks will still be traveling on Monday, and Monday morning is probably not the best time slot for everyone; Tuesday is still free, and the Contributor Summit somehow reduced its schedule from Monday and Tuesday to just Monday, so we have the full Tuesday for meetings and get-togethers like that. So it's now on Tuesday, 10 a.m. local time. I updated the schedule, and I also added the location on top of the agenda.
A: Yeah, Jeremy, you have the addition for the social event on Wednesday. Do we have any further details on that?
E: There are two venues that were high on the list. One is a rooftop place that's really close to the venue, and the other is a place that has a lot of outdoor seating, also fairly close. So I'm going to try to call today and secure both of them and see which one I can get; yesterday both were closed, as they close on Mondays, so I'm going to try to block off some time. We had 10 folks responding to the survey, so I'll shoot for about that number.
E: If anybody else is interested in coming and hasn't indicated so, just ping me and let me know, so I have a rough idea of how many folks are coming.
D: Hey everyone, so here's the situation. I have spent a lot of time since our last meeting trying to see how the Open Build Service works and whether it is something that we can use in our pipeline. To begin with, I would like to thank a lot of folks from openSUSE and the Open Build Service who helped me get started and learn more about how everything works and how we can use it. What is most important about the situation is that the initial phase was successful.
D: I was able to build debs and RPMs for amd64 for all packages that we already publish, like cri-tools, kubernetes-cni, kubelet, kubeadm and kubectl. I used two projects for that: the first one is isv:kubernetes and the second one is my home project. I think isv:kubernetes is probably the most complete, but I think it only has, actually, it has only Debian packages for now, and only kubelet is actually built there as both deb and RPM. So you can take a look at what it looks like and what we can expect from those packages.
D: I think it is fairly okay from the standpoint of how it is organized; I think it is fairly okay compared with what is described in the KEP. I think it is okay: for example, we can create multiple subprojects and that kind of stuff, and we can create those channels, like nightly, daily, whatever we want to have for pre-releases and stable releases. I think that is not going to be a problem. And I actually tried to install those packages, and that was successful.
D: I even tried migrating an existing cluster, like creating a cluster installed with Google's packages and then switching to OBS and upgrading the cluster, and that works as expected. There were no issues; everything went really well. So this is also another good thing, and the migration process was pretty simple: I just had to remove the old repo and the GPG key, then add the new repo and GPG key again, and then just run update and upgrade the packages, and it worked out of the box. There were no additional steps I actually had to take.
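On an apt-based node, the migration described above amounts to only a handful of commands. The dry-run below prints the steps rather than running them; the repo URL and key path are hypothetical placeholders, not real endpoints.

```shell
# Dry-run of the apt-side migration: swap the repo definition and the
# signing key, then upgrade in place. URL and key path are placeholders.
migrate_apt() {
  cat <<'EOF'
rm /etc/apt/sources.list.d/kubernetes.list
rm /usr/share/keyrings/kubernetes-archive-keyring.gpg
curl -fsSL https://pkgs.example.org/kubernetes/Release.key | gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.example.org/kubernetes/ /' > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get upgrade kubelet kubeadm kubectl
EOF
}
```

Because the package names and contents stay the same, the upgrade path is the normal one; only the repo definition and signing key change.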
D: So that is on the good side. Also on the good side: the build times were okay. My initial experience was that they were variable, because when I was building in my home project some stuff was very slow and I had to wait many hours, but it turns out that isv:kubernetes has a bit higher priority, and after that I didn't have problems. Everything was finishing in a matter of minutes; let's say 10 to 50 minutes to get packages was something that I observed.
D: I think this is not too bad, and I think that even if we eventually add multiple architectures, like arm and the other stuff we have, I think they run in parallel, so it's not going to be a problem, and I think building packages shouldn't take more than half an hour or something like that. But this is something we are still going to see: how is this going to work?
D: Also, something that is different, and something to pay attention to, is that the spec files have to be changed a little bit. For RPM we were able to use the existing one more or less without significant changes; I had to change some, let's say, smaller details, but everything else was pretty much the same. For Debian, it actually required some significant changes: I had to change the structure a little bit, and also in our Debian packages...
D: The situation is that when we build packages, we run curl to download the binary, then use that binary, put it in the package, and complete everything. But this was impossible with OBS, because the Debian builders are not really connected to the internet, so we can't use curl to download packages. We had to fetch those binaries, put them in the package sources, and then push them to OBS. So this is basically how it is going to work. I'm still going to ask...
D: ...the folks if internet access is maybe one possibility, but I wouldn't count that much on that. So even the way that we have it now, where we fetch things ourselves, is going to work pretty much okay. I think there is a problem, and I will note here, about the layout of the repositories and packages. Right now, the key issue is this: for Debian packages, we have a package for each of the packages that we publish; we have dedicated kubelet, kubectl, kubeadm and so on.
D: For RPM, we have everything in the same spec file, so it defines all packages, and in OBS there is only one package created: you just create one package and it is going to build everything there, kubelet, kubectl, kubeadm and everything. And there was, let's say, a requirement in the Slack channel that we keep this one RPM file and build everything with one spec file, but I'm not sure how much I like that.
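For contrast, the two layouts being debated look roughly like this in RPM terms; both fragments are abbreviated, illustrative sketches, not the real spec files, and the summaries are placeholders.

```
# Option 1: one omnibus spec, many subpackages (the current layout);
# a single "kubernetes" source package emits every binary package.
Name:     kubernetes
%package -n kubelet
Summary:  Node agent
%package -n kubectl
Summary:  Command-line client
%package -n kubeadm
Summary:  Cluster bootstrap tool

# Option 2: one spec per deliverable, e.g. a standalone kubelet.spec;
# each package is built, versioned, and maintained on its own.
Name:     kubelet
Summary:  Node agent
```

With option 2, a change to one deliverable touches only its own spec, which is the maintainability argument made below; option 1 keeps a single source of truth but rebuilds everything together.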
D: I am a little bit more of a fan of having multiple RPM spec files, one spec file for each package, because it makes stuff much easier to maintain, at least in my experience. But this is something that we have to discuss. So this is one of the action items: discuss how we want to, let's say, template our spec files. And now the not-so-good part: we are still not clear on how we are going to do the multi-arch builds.
D: The problem there is that we will probably have to create a tarball with all binaries for all architectures and then push it to OBS. That can be quite large, and there are some concerns about how the builds are going to work. What usually happens is that folks publish sources; we would have to publish the Kubernetes sources and then build the binaries we need for each architecture. But I also have doubts about that approach, because, for one, Kubernetes is pretty huge, I think even without git history...
D: ...you'd need Cloud Run or something like that. So this is a problem that we are going to have, and I don't think that using sources is actually going to work, but it would be nice to get on another call with the OBS folks to discuss how we are going to do this. And another problem, which has been requested in the KEP by Ben and by some of the folks, is that we should build packages ourselves and then provide them to Google and provide them to OBS. And that's not going to work for OBS.
D: You can only provide sources; you can't provide, like, already created packages, and the packages that are created by OBS are already signed. And it's mostly the case that Google can't take signed packages and publish them, because their systems are not really made for that; it's probably the case that they can't even re-sign them, and also that we can't remove the signature or something like that. So this is not going to work. This is an awkward topic that we have to discuss with Ben and with the Google folks: how are we going to handle it?
D: The question is to again meet with the OBS folks to discuss the next steps, to discuss those problems, and to see if we are going to be able to use their infra, the openSUSE infra, so that we don't have to run our own OBS. Yeah, if we do have to run our own OBS, I have some concerns about that, because this is another component that we don't really have experience with, and it has a lot of moving parts that we need to take care of.
E: I was just going to say, the openSUSE folks already said that they would be willing to host these on the OBS infrastructure.
E: So we can go ahead with that assumption in place, and then, if we wanted to migrate to our own OBS instance at some point in the future, we would have the flexibility to do so, and we could migrate that information over relatively easily. The other thing I would recommend is that we put, basically, a proxy in place, so that we have the naming that we want underneath the kubernetes DNS, and then we can proxy the OBS-hosted images through that proxy.
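A proxy like that could be as small as a single reverse-proxy rule in front of wherever OBS publishes the repositories. The server name and the upstream path below are hypothetical placeholders, not agreed-upon endpoints.

```nginx
# Hypothetical: a stable, community-controlled name in front of OBS hosting.
server {
    listen 443 ssl;
    server_name pkgs.example.k8s.io;  # placeholder name under kubernetes DNS

    location / {
        # Forward to wherever OBS publishes the repositories (placeholder path).
        proxy_pass https://download.opensuse.org/repositories/isv:/kubernetes:/;
        proxy_set_header Host download.opensuse.org;
    }
}
```

The design benefit is exactly the migration flexibility mentioned above: users pin the community name, and the upstream behind the proxy can change without breaking them.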
D: Okay, yes, I agree with that; I think it is a good point. I think we should definitely have something like that, some proxy. I think we have something similar for Google, like there is apt.kubernetes.io and yum.kubernetes.io, or something like that, so I think it should also work for the openSUSE infra.
D: I don't see that as a problem. And yeah, I think that, as for the next steps, it is probably to get the multi-arch stuff working. I'm going to look into that; as Sasha commented in the chat, there are services, and a download_files service, for getting the files. This is something that I am going to try. I have already asked the openSUSE folks about that, and they told me about this, but I didn't have time to try it yet.
A: Yeah, this is something we can play around with, right? They have this service infrastructure, which allows internet access, for example, and if we use it, for example, for downloading Go modules and things like that, we don't have to vendor everything into the repository. So we can get access to the internet by using those services.
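For reference, an OBS source service is configured through a `_service` file committed alongside the package sources. A minimal, illustrative example using the `download_files` service mentioned above, which fetches the URLs listed as `Source:` entries in the spec when sources are committed, so the network-isolated builders themselves never need to download anything:

```xml
<!-- _service: resolve the spec's Source URLs at commit time,
     instead of at build time on the offline builders -->
<services>
  <service name="download_files" mode="trylocal"/>
</services>
```

The `mode` controls when the service runs; `trylocal` runs it on the client side and commits the fetched files along with the sources, which matches the offline-builder constraint described earlier.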
D: I think this is something that we still have to see, because the OBS client is in Python, so we can't really do that natively. We would probably have to create some API layer of our own that we use; they provide the API, and there is some documentation. I started looking into that, and I don't think that we need too much. We probably just need to do stuff like pushing files and that's about it, and I think this is doable; it requires some work, but I guess it's possible.
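That layer can indeed stay small: uploading a source file to a package is a single authenticated PUT against the OBS API. The dry-run sketch below only prints the call; the project and package names are illustrative, and in practice the `osc` client wraps the same endpoint.

```shell
# Print (rather than execute) the OBS API call for pushing one source file.
# PUT /source/<project>/<package>/<file> is the documented upload route;
# project/package names here are illustrative assumptions.
obs_put() {
  project="$1"; package="$2"; file="$3"
  echo "curl -u \$OBS_USER -T ${file} https://api.opensuse.org/source/${project}/${package}/${file}"
}
```

For example, `obs_put isv:kubernetes kubernetes kubelet.spec` prints the upload command for one spec file; a thin Go wrapper around a handful of such routes would avoid depending on the Python client.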
F: I think there was a recent conversation in a PR that was basically, you know: hey, we shell out to so many other things in general. I think that this is effectively going to be a trigger somewhere. Maybe it's a separate, you know, maybe it's a separate GCB job that we run, or something that's triggered as a result; it's another step within the GCB process. But I think packaging...
F: You know, their tool, if there's already, you know, a container image for it, packaging it, or, if not, you know, packaging their tool, triggering the jobs, and reading the output from there should be sufficient. We don't necessarily have to integrate into krel, especially if there is not already a Go API for this. I don't want to, I don't think that we need to, get into the business of doing the Go API for OBS just to satisfy this. So I think, you know, this is, Marco...
F: This is phenomenal progress. Thank you to Marco and to Jason and all the folks that have been working on getting this going.
D: Something more that I'm going to need some help from SIG Release with as well is to decide how we are going to handle our spec files, both for RPMs and Debians. For example, do we want to have one huge RPM spec, or do we want to have multiple RPM specs, one for each package? How do we want to handle Debian packages? There is some idea that you can use RPM specs for building both RPMs and Debian packages, and someone in the release Slack channel suggested...
D: ...that tool; actually, I think that podman is using that. But I am not really a huge fan of that, because I would like us to have Debian sources, and that way Debian users cannot natively build packages themselves, while Steven said that we should give that possibility to users. There is stuff like that that we need to decide, and I don't really want to decide on my own, so, yeah, this is something that I'm going to need help from SIG Release with.
F: So my quick thoughts there would be, you know, echoing what Jason said in the chat: it is simpler if the concerns are separated, you know, a spec or, you know, a Debian config per package. And I think, you know, for the RPM side, we have been surviving with this omnibus package, but it really should be a separate spec per package.
F: The thing to think about is, whenever you're changing an interface like this: how is this going to change the way a user consumes our packages, right? So, as we're kind of walking down this path, are we making sure that we're reviewing things like the installation docs on the website, right?
F: Because those are, you know, those are effectively people's expectations of how to install Kubernetes, if they're going the package route. So as long as we're meeting them, or keeping them in mind, as we are changing documentation and process, then we should be fine.
D: I agree with that, but with the spec files I think we have a little bit more freedom in terms of what we can change, because as long as you don't change stuff like the package names, and you don't change what you deliver in those packages, users are not going to care much whether this is going to be one spec file, 10 spec files, or whatever. So we have...
F: So what I would say here is: as much as possible, the fact that we are moving to new systems gives us an opportunity to be clear about the fact that we are moving to new systems, and to break things if and when necessary. I think the ideal outcome is that there is no breakage, but if we're being truly honest, deb and RPM building and publishing has been broken in Kubernetes for too long. So let's, I think, backwards compatibility is maybe a nice goal.
F: Yeah, so don't feel hampered by certain things, by the way that we're doing certain things right now, because we are going to have to sit down as we get closer to shipping this and say: does this work, you know, does the system work, is it as expected? And that is a huge opportunity to have a net improvement for folks.
D: I will see about meeting the openSUSE folks, so that we can go through the stuff that we did once again and make sure that they agree with what we did, and to see if there may be some better way, or if we missed a topic. And, yeah, I think it is okay for now; I will continue working on this track to drive it forward, and let's hope that we can make it work. And, yeah, I guess one thing that we should also do is try to get that KEP merged.
A: Yeah, we basically need approval for this KEP, so, Jeremy, if you would have a minute to read through this KEP and give it an approval, then we can move forward with it. (Sure.) Thanks. One more question, or one last question, Marco: the main goal is to move the deb and RPM package building to community infra, but we haven't really defined what community infra means. I would assume that it mostly means that every release manager is able to debug in case something fails, or that the logs are visible to everyone.
A: Do you see anything which really does not fulfill the community-infra aspect of that effort?
D: So, my opinion on this: I mean, it is not going to be truly community, because this is running on openSUSE's infra and staff, but I think this is okay because, as you said, the logs are public and anyone can go ahead and see the logs of what failed. And besides that, you actually have a way to build packages locally, and this is super useful, fantastic stuff: with their own CLI tool you can run it locally and it's going to print you the error.
D: So even if you give access to someone, they can just access the public key, so no private key, and we don't have to deal with key management. And even if we would want to go with, like, a fully community-run OBS, the platform is open source and there are images that you can run on your own. This is not the ideal way of doing things, because there is that maintenance and other stuff, but it is possible; it is not like it isn't.
D: By the end of the year, maybe. Maybe one thing that we can ask for the OBS topic is: how are we going to proceed in terms of using it? Like, I guess we can set it up, create packages and so on, and then what are our next steps going to be? Are we going to announce this, declare some alpha state, ask folks to try it? How do we want to go with that?
A: So, after the KEP has been merged: we have kind of outlined the next steps in the issue linked to the KEP, together with Lori, and I think, if we decide that we move forward with OBS, then we should, in an alpha state, do the implementation and build something which kind of replicates the current state, but built on community infra. That would be the alpha, without having the actual transition done, right? So we just have the system up and running in parallel.