From YouTube: Kubernetes SIG Release 20200519
A: Right, hello everyone. Today is May 19th; it is a Tuesday. This is the SIG Release bi-weekly meeting. It's a meeting that is recorded and available on the Internet, so please be mindful of what you say and do, please be sure to adhere to the Kubernetes code of conduct, and in general just be awesome people.

A: We've got a few things on the agenda. A lot of them are just updates on some of the work that we've been doing over the past month and change.
A: We've been working in the background to move the Kubernetes base images over to Kubernetes community infrastructure. The ones that we primarily care about are debian-base and debian-iptables. They form the base of the images that we use for our, quote, "core" images: kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy.
A: We have new versions of the debian-base and debian-iptables images. If you're looking for those, they are at us.gcr.io/k8s-artifacts-prod/build-image/<your image>, so for example debian-base. The new version of debian-base is v2.1.0 and the new version of debian-iptables is v12.1.0.
A: So if you are managing a project that requires an updated image (those images include an update to Debian Buster in some cases, if you were pre-v1 for debian-base or on v11.0.4 for debian-iptables), please use the new endpoint. We're basically waiting on Google to work out some internal issues around the vanity domain flip, that is, our flip from k8s.gcr.io to the underlying registry.
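The full image references mentioned above can be composed like this; a minimal sketch using the registry path and versions stated in the meeting:

```shell
# New endpoint for the SIG Release build images described above.
REGISTRY="us.gcr.io/k8s-artifacts-prod/build-image"

# The two base images and the new versions called out in the meeting.
DEBIAN_BASE="${REGISTRY}/debian-base:v2.1.0"
DEBIAN_IPTABLES="${REGISTRY}/debian-iptables:v12.1.0"

echo "${DEBIAN_BASE}"
echo "${DEBIAN_IPTABLES}"
```

A consuming project would then reference these in its Dockerfile or pull them directly, for example `docker pull "$DEBIAN_BASE"`.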
A: Okay, easy-peasy. The next image is go-runner. go-runner is some work that Dims has been working on to essentially provide some utilities; it's essentially a "distroless plus plus". There is a KEP open to rebase all of our base images onto distroless, and some of the images require debugging utilities. The debugging utilities are the means to redirect log output from klog, and because of that we need to add some extra bits on top of the distroless image. That is go-runner.
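To make the log-redirection point concrete: on a full Debian base you can redirect a component's output with the shell, but distroless ships no shell, which is why go-runner re-implements the redirection as a small Go binary layered on top. A minimal shell analogue of the behavior (illustrative only; the real go-runner is Go code with its own flags):

```shell
# Run a command and append its stdout/stderr to a log file. This is the
# kind of redirection a shell normally provides; on a distroless image
# there is no /bin/sh, so go-runner does the equivalent in Go.
log_run() {
  local logfile="$1"; shift
  "$@" >>"$logfile" 2>&1
}

log_run /tmp/go-runner-demo.log echo "starting component"
cat /tmp/go-runner-demo.log
```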
A: go-runner has merged into kubernetes/kubernetes master, and we're starting to look at integrating it as the base image for a few of the images that require klog, require redirecting output, or require some sort of debugging that we don't necessarily want to do with debian-base. As of today, I believe we've migrated kube-apiserver, and I believe kube-scheduler, over to go-runner as their base image.
A: For kube-proxy we're still using the debian-iptables base, and for the kube-controller-manager we're still using debian-base. The reason the controller manager is still on debian-base is that there's some exec-based magic in some of the controllers, specifically, I believe, some of the storage controllers, that would prevent us from moving over to a distroless-like base image. So there's still some ongoing work there. If you want to check it out, I can provide some links later.
A: OK, cool. Yes? No? Maybe? Alright, cool. The next piece of that is: because we're tracking images for the effort to rebase everything onto distroless, I took some time to do an update of the base image exception list. The base image exception list will give you a run-through of some of the images that we care about. It's not a completely exhaustive list just yet, but it covers the images that we care about for releases. I've broken it into release and non-release images, and then also non-org images that were tracked previously.
A: Some of the ones I was talking about earlier: debian-iptables, and the ones for kube-controller-manager, kube-proxy, kube-apiserver, and kube-scheduler. Then there are some non-release images that you'll care about, like etcd and the etcd-* family of images, as well as fluentd-elasticsearch, ip-masq-agent, and all the k8s-dns images (dnsmasq-nanny, kube-dns, node-cache, sidecar), plus addon-manager, node-problem-detector, things like that. So check that out to get a better idea of what images we depend on and why.

A: I updated the list to include whether or not an image is supported, the reason that we're exempting it from needing to be on distroless, as well as the owners that are attached to it. If you click the link for an image name, it'll lead you to the Dockerfile, or lead you to the folder where the image is built. If you click on the base image, it will link you to the Dockerfile for the base image in question. And yeah, that's about it for that. So, any questions?
B: Well, someone's at the door, that's my doorbell, but I'll go further into what I was thinking. One of the challenges we have is managing all the things; obviously that's part of what our SIG is here to do, with all these different images. And we always have kind of a question: should we use this one or that one, whose should we use, who's maintaining it, do we need to maintain our own? etcd is a seemingly standard thing that's kind of everywhere; there's a bunch of preexisting etcd images out there.
A: Yeah, it's the base image; it's exposing a few ports and then copying in some of the etcd utilities, as well as that migrate-if-needed script. So let's catch up with Dims and see what's up with that, because I know he's got a PR active right now, because we just merged the image building and pushing on the test-infra side for etcd. So let's make sure that we're building what we need to.
B: That migration script has been around since at least 2016. There's this newfangled thing called kubeadm; I wonder, as tests move to newer mechanisms for upgrading, whether this script might become less needed, and maybe we get to a place where we don't need a custom etcd. But yeah, that's a meta-issue for the future.
A: Yeah, I think the overall goal is to get rid of cluster/, and I think that's been something people have wanted to do for a few years, but it will take time. I think we're making strides in terms of moving things over to Kubernetes community infrastructure, but there's lots of interesting stuff in the cluster/ directory that we can't get rid of just yet; a lot of our end-to-end tests depend on that.
A: Awesome. So yeah, I'm really excited to see that we were able to release without cutting that branch. This goes into some of the changes that Sascha and I made over the last week or so, the changes to the 1.19 release schedule. Because we've shifted a bunch of the timelines, we're essentially moving to cut the release branch at RC time. So this does a few things.
A: Previously we did branch fast-forwards, and we're trying to wind down some of that usage. We'll still be doing some fast-forwards throughout the code freeze period, from code freeze up to some specific boundary that we haven't defined yet. We should decide when we're going to move from doing fast-forwards into cherry-pick-only mode; we can save that for maybe next week, and release engineering can figure out dates for that. But this is a reasonably heavy change to anago.
A: I don't think we've done something that big in anago for a bit. A lot of the beta logic, a lot of the branch-cutting logic, a lot of the things that are keyed on the beta release label have been shifted to the RC phase, and I thought a whole bunch of stuff was going to break, so I'm pleasantly surprised and happy about that work.
A: The next piece that we're working on (this kind of goes into the patch releases tomorrow) is fixing a push issue with the official type. We switched to using types for the release-type flag, so you can specify the alpha, beta, RC, or official type, and based on that type, assuming you've got the right combination of flags, you'll either release an alpha, beta, RC, or official. For the people who are not familiar with how our releases work, really briefly, it depends on the type that you specify.
A: If you didn't specify a type, we'd assume it was a pre-release, and by pre-release we mean alpha, basically. For alphas, you cut an alpha on the master branch. For betas: if you trigger a beta for a release branch that does not exist, it will cut the release branch, in addition to cutting the beta.0 for that x.y, and then also cutting the x.y+1.0 alpha.0 on the master branch.
A: So if you cut the 1.19 release branch and the 1.19 beta.0, it will also cut the 1.20 alpha.0 on master. Then, you can only cut RCs on one of the release branches, and once you cut an RC for x.y, you cannot cut a beta for that same x.y. (When I say x.y, I mean the major and minor versions; x.y.z adds the patch version. Just for clarity.)
A: So this is a bit of a change in the system. We weren't sure that we would be able to support doing alphas and betas on the same branch, but we can, and that's great; I've done some testing in the background. Essentially, when we cut RCs, we just cut the RC, and previously, when we cut officials, we cut an official as well as the next beta. So now we're going into a phase where, when we cut the official, we're cutting the next RC.0 of that release branch.
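The cutting rules described above can be summarized with a small sketch; this is purely illustrative, not the actual anago/krel logic, and the function name is made up:

```shell
# Print what each release type cuts, per the flow described above.
cuts_for() {
  local type="$1" minor="$2"   # e.g. cuts_for beta 1.19
  local next_minor="${minor%.*}.$(( ${minor#*.} + 1 ))"
  case "$type" in
    alpha)    echo "v${minor}.0-alpha.N on master" ;;
    # The first beta also cuts the release branch and the next minor's alpha.0.
    beta)     echo "branch release-${minor}; v${minor}.0-beta.0; v${next_minor}.0-alpha.0 on master" ;;
    rc)       echo "v${minor}.0-rc.N on release-${minor}" ;;
    # Under the new scheme, an official also cuts the next rc.0, not a beta.
    official) echo "v${minor}.z official; next rc.0 on release-${minor}" ;;
  esac
}

cuts_for beta 1.19
cuts_for official 1.19
```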
A: The release commit is essentially to ensure that the official release and then the next beta are not cut on the same commit, because that leads to interesting things with our publishing and some of the CI tests, and also with ci-kubernetes-build, which pushes a CI version of the release artifacts that are then consumed in our end-to-end tests. So we need to ensure that there were truly separate commits.
A: There's also the dependency report that's output in our release notes now, so you can see what dependencies were added, modified, and removed in kubernetes/kubernetes as a result of the release. That dependency report is now part of krel changelog as well as krel release-notes. So, Sascha, would I be correct in saying that anybody who decided to use our release notes tool could now generate a dependency report for their project? ("Exactly, yeah.") Really awesome work, thanks.
A: There are some categories, and a YAML file, of course, for configuration, where you can define exactly how you want to do triage, whether it's on a daily, weekly, bi-weekly, or monthly basis, and the kind of indicators that you want to pull in: whether it's an issue that has a high comment volume, or an issue that has multiple reaction emojis, or things that have not been touched in X amount of days.
G: It should make your life easier; we don't want the opposite, and as far as we can tell that's definitely not the case. In case someone has had any previous experience with this tool: we can say that it has been a smooth experience for us so far, and now we're exploring the options just for the infrastructure. So in case you have any questions, let us know in the Slack channel.
A: Right, so say they're testing a fix. They may be testing the fix, thinking about the different permutations, the things that can go wrong, and trying to test it in different areas, and right now that's a manual process. The reason that we haven't automated that today is that there are, I mean, real security implications. So when we're doing... yes, yes, Tim.
A: So when we're doing the releases, the GCB runs have access to the GitHub release token. One of the things that I'm worried about is potentially exposing that token in a place where the group of people who manage the infrastructure should not have that level of access to the various credentials that we're using. So we need to figure out how to do that in a good way.
A: What was suggested is that maybe eventually SIG Release gets their own build cluster, and I think that is something reasonable to do, but right at this moment I'm a little concerned (Verónica, to what you were saying) about the maintenance overhead. I don't personally want us to have to maintain a Kubernetes cluster.
A: Cool. So we're looking at doing two updates to Go. The first is, I guess I'll give an update on my side: I've been working on the Go 1.14 bump in master. We've kind of dragged this PR along from 1.14.0, to 1.14.1, to 1.14.2. There were various things going wrong: scalability-wise, there were some concerns on the Go side, where some of the functions were causing nodes in scalability situations to get wedged, so it's not something we could have merged at that time.
A: There is now a slew of Bazel failures, about which I'm not going to say anything, which is going to require an update to repo-infra, I believe, to support the new version of Go within their Bazel Go rules. Once that's fixed, I'll retest on that PR. If there are more Bazel issues, we'll try to knock those out before coming back to the build side. And yeah, I know who has video on, because they're all cringing about Bazel right now.
A: So once we figure that out... I mean, the 1.14 bump is something that I want to land for 1.19, but we're kind of dragging the PR along in the meantime, and I'm making sure it gets rebased and updated every week or two. On the Go 1.13 side, since the patch releases it's a little easier for us to get that done. Right now Marky and Verónica are tasked with doing the Go 1.13.11 updates (they were on 1.13.10, but it's now 1.13.11).
B: Briefly, there are two parallel conversations going on. The working group has been in discussion for about two years on if, when, and how we could increase support beyond nine months. As of January there is consensus that, yes, we would start that for 1.19, so the KEP merged as provisional in April, sometime last month, and we're looking to bump it over to the implementable state, again still just for 1.19 and newer. But also over the last month the conversation started up again; there's a lot of different context now compared to discussions over the past years.
B: Once we address one of the questions that's on the PR, we'll post today a lazy consensus for Friday, also on the KEP moving to implementable. There are, I guess, four combinations possible there: 1.19, yes; and for 1.18, 1.17, and 1.16, yes or no on either or both of those as well. But it's looking like by Friday we'll have a lazy consensus on some of those, and we'll see which. Basically, I think we're ready, or almost ready.
A: And I think that is somewhat an artifact of the work that we've been doing on the release engineering side, definitely. We have, I think, raised confidence in the tools that we use, and we've established some process around the things that we do day to day. And, you know, when we were bringing up the 1.19 schedule updates...
A: A few people brought up some good points about when do we do this, or when can we do this, right? As I initially mentioned, 1.15: some things to note about the 1.15/1.16 boundary. 1.15 is currently on Go 1.12.17, and, I mean, this is past discussion, we've marked 1.15 as officially out of support as of the last patch release cycle, I believe. So the big thing was that using an unsupported version of Go was kind of a no-go for us.
A: Maintaining something, or having to consider backporting fixes, or maintaining our own fork of Go, is not really an acceptable path to go down. There are also concerns around maintaining the infrastructure that our tests run on for an additional branch for longer. And then you look at the 1.16 side, and what 1.16 introduced is a set of deprecations for APIs. So, you know, Tim and I were chatting about this and we're like...
B: We had a little discussion on Slack and GitHub, and I think I have clarity; I'm about to push another sentence or paragraph on the thing, and hopefully that addresses Aaron's question. And then that's enough: the email saying lazy consensus on Friday, and then people can sort of choose their own adventure (both, or only one, or neither) if there's nobody yet to come out with a major issue.
C: Yes, thank you for bringing this up. I put this here just to find out if these cherry-pick requests, the PRs that I've linked, are okay to qualify as a cherry pick, or is that more like a feature and we only have to do it in 1.19? Just for some context: the PRs change the behavior of how kube-dns pods are going to be scheduled. Without the change, the pods could be scheduled on the same nodes.
A: That's kind of the game we play with a few patches, whether or not it's a pure feature, but I would call this a bug fix, or a regression fix. The one thing to note is that we have passed the cherry-pick deadline for this cycle. It sounds like something that I would let slip in, though. Tim, what do you think?
B: I think so. It's always a first indicator when there's a bug or an issue that's been around for a while and it's marked "feature", and the patches that come in around it are also marked "feature". It's just sort of a flag that we key off of, really not to say no directly, but to start a conversation, and the response on GitHub between you and Wojtek covers that now as well.
B
I
think
that
lays
any
fears
just
having
having
remembered
like
the
complexity
in
the
past
and
wondering
about
the
scalability
issues,
seeing
that
Wojtek
is
saying
like
yeah.
Let's
do
this
I
think
that
that
covers
any
concerns
that
I
have,
and
it's
also
a
normal
pattern
for
something
that
feels
like
a
feature
to
also
end
up
being
a
bug.
There
there's
often
a
gray
space
between
the
two
and,
as
time
goes
by
something
that
felt
like
it
was
going
to
be
fixed
and
be
called
a
feature
to
have.
B
The
fix
starts
feeling
like
we
just
need.
This
bug
fix
this
one
I
don't
know
if
it's
quite
exactly
that
case
there's,
but
it's
been
sort
of
a
dance
back
and
forth
and
I
wouldn't
want
to
be
finding
out
that
after
we
ship
this,
we've
reintroduced
something
so
I'm,
okay,
merging
but
I.
Think
because
we're
past
the
traffic
deadline
we're
about
to
release
tomorrow,
we
have
other
CI
issues
on
one
18
and
17.
B: I would like to wait until Thursday to merge it to the branches. Then we also have time to see, over the next month, if anything else crops up from testing against master or on these two branches, and we could deal with any other fixing that might be needed over the next month before the likely June patch release. Does that feel okay? Yeah.
A: Cool, awesome, thank you so much. So, a question for you since you're here: can you give us kind of a state of affairs on the DNS stuff? I was starting to poke around in the DNS repo and I noticed that you were on some of the PRs as well, so I'm kind of curious. As we're working through the base image updates (I don't know when you joined the call), we also want to make sure that we hit the DNS images and make sure that they're all updated.
C: Sure, yes, and I did see the update in this meeting about the base image. I also reviewed the PR for the DNS images to use the latest debian-base; that went in yesterday. We are actually still using the images off gcr.io/google-containers, just because the k8s.gcr.io flip is not fully done; I was just waiting for that to kick in. But we do have the automated build process that pushes to the new staging repo; I did set that up in test-infra, so that's happening after every commit.
C: Correct. For the debian-iptables one, the issue was that with the latest iptables version, adding rules to a particular table (writing to the raw table) wasn't happening correctly. It was fine with everything else. I don't recall exactly why, but I do know that the person who opened the PR investigated and found that it was actually a bug fix, and we are just waiting for that bug fix to be part of an official image.
A: The front matter for a KEP is now in a separate kep.yaml within that folder. The KEPs are now named so that the enhancement issue number plus the KEP title is the name of the folder, and then within the folder there's a README and a kep.yaml.
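The restructured layout looks roughly like this; the KEP number and title below are placeholders, not a real enhancement:

```shell
# Sketch of the new KEP layout: the folder name is the enhancement issue
# number plus the KEP title, and it holds a README plus a kep.yaml with
# the front matter that used to live at the top of the README.
mkdir -p /tmp/keps-demo/sig-release/1234-example-kep
touch /tmp/keps-demo/sig-release/1234-example-kep/README.md
touch /tmp/keps-demo/sig-release/1234-example-kep/kep.yaml
find /tmp/keps-demo -type f | sort
```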
A: As I was going through doing some of the updates, I re-read the SIG Release KEPs to fit the new format, and I noticed that there's a set of KEPs of ours that have not seen updates in a bit. Those KEPs are linked here, along with the relevant tracking issues: 1729, 1731, 1732, 1733, and 1734. They cover rebasing images onto distroless, which we talked about earlier; publishing packages, which I believe was written up by Hannes a while back around our deb and rpm pushes, and possibly other artifacts that we maintain on the release side; and same with the artifact management one.
A: That one, I think, was done by Brendan a while back. For release notes, we have some tangible updates, but we haven't reflected them back in the KEP. And then finally there's the k8s image promoter KEP, which has been worked on by the working group, WG K8s Infra: Linus, Tim Hockin, and a few others on that team. So, TL;DR:
A: What I'd like to see is us update our KEPs. I think our KEPs are more likely to go stale because we tend to manage things that are tracked out of tree, so they're not actively pinged on by the release team; we didn't have enhancement issues for these. Now we have enhancement issues for them, and we need to find the relevant owners for these things. I feel, based on the work that Tim and I have been doing, that the rebase-images-to-distroless one is probably in our court, Sascha.
A: We already have an enhancement issue for it. Give me a day or so to get all the enhancement issues updated, and at that point you can shift over as the owner, update the KEP metadata and all that stuff, and give a proper update from the provisional state to where we are now, because we're definitely in the post-implementable, kind of near-implemented state. And then for the k8s image promoter, I think Linus should be the owner for that one; publishing packages and artifact management...
A: Alright, well, thank you as always for hanging out with us. We have this meeting bi-weekly; there are also the release team meetings, if people want to hang out on those, and the release engineering meetings, which are at the same time on alternating weeks with the SIG Release meeting. So catch you at one of those, or on Slack later. Thanks, everybody!