From YouTube: Kubernetes SIG Release 20200616
A
Hello, hello everyone. Today is June 16th, and this is a SIG Release meeting. This is a meeting that is recorded and will be available on the internet, so please be mindful of what you do and say, please be sure to adhere to the Kubernetes code of conduct, and in general, just be awesome people. So we've got a super light agenda, but it is probably one of my favorite agenda items in many, many moons.
A
So we have been talking about this on and off, kind of in the background, for a little bit. I kind of probed some of you during KubeCon and in some private conversations, and I think that we're ready to do this. We had been talking about what SIG leadership looks like overall, and I think that you may have seen my mailing list thread, my mailing list note, but, you know...
A
You've seen that as a result of them being brought into the release managers group, or promoted into the branch manager and patch release team member roles. And obviously, I think, it's the natural progression of things on the release team side, where you shadow, you lead, you shadow for the lead spot, and then you eventually move into the lead spot. So I think, in all of that, we have...
A
...work now. So what we're planning to do is this. One, for people who are not necessarily familiar with Sasha's work or with Jorge's work: for Sasha, I think the proof is in the pudding, honestly. If you go to kubernetes/release, Sasha has the top commits behind me. He's been working on improvements across release notes, across the core release toolbox, and doing really thorough reviews for a lot of people. I think the reviews he takes on stand alone as a sign of leadership.
A
But all of these ideas that he comes up with on the release engineering side have been incredibly useful as well, as has finally seeing them get implemented, and kind of having someone that I know is going to be responsible for reviews and approvals as we move through. That has been very, very helpful, knowing that we don't have the...
A
...bandwidth to take care of everything, and having reliable reviewers and approvers alongside; I think Sasha has been one of the most reliable for that. So that's the release engineering side. On the release team side, and then CI signal, I think Jorge has gone above and beyond: for one, running the release team; then the efforts that he has made across multiple cycles for CI signal, and the fact that he just keeps continuing with some of that stuff.
A
Having conversations on CI signal across multiple SIGs: I saw that you recently updated the SIG Node test failures group, and you've been working through SIG Node to kind of improve more of the community's overall test posture. So I think what we're going to do is try to keep the two of them in their specific areas of focus for a little bit as you all ramp up. The first thing I'd like you to do is work on a playbook.
A
I know that both of you have also been working with Laurie to talk about optimizations to the release team and overall processes. I want you to continue that work, and then, outside of that, just, you know, be there to mentor people. I think that's what this is going to allow us to do: it's going to defray some of the day-to-day burden from Tim and me, and we can start to focus on some more strategic efforts for the SIG.
A
I think that we're starting to get a lot of traction in some of the work that we've done over the past few cycles. So this will just give us some time to think strategically, start writing policy docs for things that have been missing for a while, and start to target what we're going to be doing over the next year, next two years. So, yeah.
D
So I'm really happy with the extent to which people are coming in, and that you are all finding a place to contribute, and doing that across a longer span of time; it's not just sort of the release team in one quarter. People are coming in and doing more deep work, and we're making such nice progress on cleaning things up and modernizing and getting to a better place in how we're doing the release. So thank you all.
A
Yeah, we have created some things that did not exist when we started, which has been really exciting, and I think one of the nicest notes that I saw was the note from Jaice on the mailing list, because Jaice is the one who brought us on as SIG chairs. So a lot of what we're seeing, in seeing people bring on new chairs and technical leads, is that we're getting to see kind of the next generation come in and start working on things.
A
So we have announced it, and we're going to hold for lazy consensus, as always, with a consensus timeout of June 18th; that's Thursday at 4:00 p.m. US Eastern. I'm going to start thinking about what PRs we need to prep, because we've never done this before, but that'll all be merging in for the 18th. In the meantime, Sasha and Jorge, just continue to question the things you've been doing, really look for optimizations in our processes, and offer to take on things where you can. Yeah.
E
So I just want to give a heads up about Triage Party. I put up the PR to deploy Triage Party in the community infra, so I'm just waiting for the GitHub token, and I think we'll be fine for the beginning. Basically, the current configuration just creates all the issues and PRs for the current master.
A
So eventually, especially if we want to start on a smaller scale, we could run it against the sig-release repo and the release repo. So, ideally, this will not just be for that instance; it will not just be for the release team and bug triage. That instance will eventually extend, or we will have multiple instances, to work for release engineering as well as the other subprojects.
A
I think some of them are on this call: Daniel, Jorge. We're proposing a working group, before naming it, because naming is hard. I think the impetus around this working group is everything that's going on in the world: we're interested in eliminating racist and non-inclusive language from the project. There have been some kind of great strides there already; I've seen some changes to the docs to eliminate whitelist and blacklist on the Kubernetes website side, but we need more. We need more, and we need the effort to not be disjointed.
A
I think that a single PR and discussion in one place does not necessarily lead to project change overall, right, so that discussion needs to happen somewhere. There was a discussion on the architecture mailing list; I think the last time someone had commented on that thread was almost a year to date from the last time I was looking at it, and the discussion was around the way we discuss control plane nodes. So there are instances where you might see master roles held, or references to Redis.
A
We can shop that to other projects, or not necessarily shop it, but act as the exemplar for other projects in kind of the CNCF sphere. So that is a grander goal, and I'm going to be co-leading this effort, assuming it's approved; let's not jump the gun, it's in talks right now. But the plan is to co-lead this effort along with Zach on SIG Docs, and others as well who have been doing review.
A
So I think that's a nice mesh of people to push on this when we're ready to go further upstream; so, details to come. I would say, for anyone who is interested: I see Markey and Seth, you have your hands up to help keep a watch on k-dev.
A
That will be the go/no-go for the working group, and once that happens, we'll be able to establish Slack groups and the various things that would, you know, give the signal to establish a working group. So once we have a place to land and mailing lists, I think we can start jumping forward. So sorry for the not really specific announcement, but I think it's a...
D
It's fairly, really specific, though, because if you start looking at how we do the release, it backs up into Git and automation, and these things are also in our branch names. I just did a quick grep: the word "master" shows up in the k/release repo almost 2000 times, and there are going to be a lot of things to investigate and figure out. This isn't going to be a simple mechanical change; it's going to be a lot of engineering work to make sure that we carefully effect the change. So, yeah.
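The quick audit described above can be reproduced with a recursive grep. This is a sketch: the sample tree below stands in for a real kubernetes/release checkout, so the count here is illustrative rather than the roughly 2000 hits mentioned in the meeting.

```shell
# Sketch of the audit: count whole-word occurrences of "master"
# under a repo checkout. The sample files stand in for a real
# kubernetes/release clone (an assumption for illustration).
repo=$(mktemp -d)
printf 'git checkout master\n' > "$repo/build.sh"
printf 'branch: master\nrelease-master job\n' > "$repo/config.yaml"

# -r recurse, -o one line per match, -I skip binaries, -w whole word
count=$(grep -roIw 'master' "$repo" | wc -l | tr -d ' ')
echo "matches: $count"
```

Against the real repo this would be `grep -roIw 'master' . | wc -l` from the repo root; branch names, job configs, and Go code would all show up in the hits.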
A
Yeah, and I think what's important there is that we not only make the change; we make people aware of policies to prevent non-inclusive language from kind of coming into the project. I see that potentially something we can do is a presubmit check with a word list or something, so that once we scrub references, we can also prevent new references. I think it's not just scrubbing the existing things we have; it's also making sure that, going forward...
A
...we don't let that happen. So, yes, it will be quite a bit of work.
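A presubmit word-list check of the kind floated above might look roughly like the sketch below. The deny list, the sample patch, and the overall shape are hypothetical; an actual check would run as a CI job against the PR diff rather than against a local file.

```shell
# Hypothetical presubmit sketch: fail when added lines in a diff
# introduce words from a deny list. The sample patch and word list
# are made up for illustration.
patch=$(mktemp)
cat > "$patch" <<'EOF'
+use the allowlist for new nodes
+point the agent at the master endpoint
EOF

wordlist='master|slave|whitelist|blacklist'

# Inspect only added lines (leading "+"), then whole-word match.
if grep '^+' "$patch" | grep -Ew "($wordlist)"; then
  echo "non-inclusive terms found; please reword"
  status=1
else
  echo "check passed"
  status=0
fi
```

Here the check would flag the second added line and set a failing status, which is the behavior a real presubmit would report back on the PR.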
I think some of the interesting parts are that, from the technical side, you'll get to scratch an itch. In terms of responsible parties, sponsors for this, I see Steering, Docs, Contributor Experience, and Architecture as potential sponsors. And from the architecture standpoint, I think, if you look at the way we name things, from the API level...
A
Yes, Laurie: eventually? Well, no, the answer is no. Is there a way to roadmap the changes? No, but we will, and that's part of the reason for the formation of this group. I think that, prior to this, we saw that a lot of people were interested in doing this, and we had chats on mailing lists, but nothing concrete, maybe a few PRs here or there.
A
Thank you, Markey, and thank you additionally, because a lot of people from SIG Release had reached out to me about this. So, calling two of you out specifically, Daniel and Jorge: thank you for saying that you're interested, and for knowing that it's something important for the project to do.
D
Yes, I was just throwing that in there just to... oh my gosh, I cannot type this early in the morning with this little coffee in me. There's a link in the doc; you can click on it. It's a discussion from kubernetes-dev yesterday. As everybody would have seen over the last two weeks, we've had a change of focus, I think, and so a message went out saying, like, hey, there's a lot going on in the world; if you're taking time to do things, we understand, from us as project leads, for your SIG.
D
If you want to cancel meetings for a subproject or working group or whatever, it totally makes sense; people have a lot of things on their minds and want to focus a bit on other things. Don't feel beholden to this project. A lot of us at big companies will say, like, project before product, but I think one of the things we're seeing is real-world community before our virtual open-source project community, and we're just making that explicit.
D
If people weren't really working as productively in large parts of the community for a week or two, it's kind of our responsibility, I think, to then say we're going to back things off a week or two. So that's basically what we had discussed in the release team meetings and, more broadly than that, also in a lot of Slack discussion. But there was a question on the mailing list about how we'd reached this decision, and just to sort of react...
D
...to that here: we discussed this as possibly coming in the last release engineering meetings, repeatedly in the release team meetings, and I think it might have even been mentioned on the last community call, meaning that we were wondering if we might be slipping more things. I'd have to go back and look, but it's a reminder to us to think about how we communicate.
D
We have focused on the release things, and we do try to communicate these things broadly, but it's kind of always been one of the SIG's challenges to get the attention of other SIGs and other SIG chairs. So it's just something for us to be mindful of; I don't see any issues in how this was decided or done, or the reason for it. I just wanted to mention that all here today. Yeah.
A
...things that are going on, in a non-recorded meeting, right? So it's an opportunity for, say, technical leads and chairs to connect on a deeper level with their peers. When we're doing all of this stuff, we're kind of "on"; we're on for you, we're on for the community. So it's nice to have that space, and I think that has led to definitely more effective communication across the project. So I think we need to...
A
We need to keep it up. But I do agree that what would be helpful, and who knows, maybe this is going to be a future session, is to really understand the tiers of communication, right? When is it something that is SIG-internal, and how do we communicate that? What does it look like when we're communicating to a smaller group of people? What does it look like when we're communicating to the entire community?
C
No worries; "illumos" is correct. So, yeah, last time I was online the meeting unfortunately didn't get recorded, because it was basically cancelled-ish, so I wanted to use some time here also to say hi from the illumos community and just give a quick introduction of what we are and who we are. We are an operating system community, or a community-controlled operating system development effort.
C
It's basically a continuation of the OpenSolaris effort, originally initiated by Sun Microsystems, and it has been completely independent for a decade now, with our own little framework where we do local containerization stuff, if people remember zones from Solaris or OpenSolaris. People have wanted to get more containerization into that, both with support for the Docker formats for images and a lot of tools, and one of those tools people wanted to have on our side was Kubernetes.
C
Basically, it's including illumos-specific containers, so containers that are binary-compatible with illumos, but we also have an emulation layer for Linux called LX branded zones. I can gladly go into details, but brands are basically our flavors of how we deploy zones. This is similar to, for example, Knative versus a normal pod versus a virtualization pod, in our terms.
C
Yeah, there are a couple of things going on in the works, mostly enabling compilation of the Kubernetes binaries on illumos. Everything else is the idea to run, or to have it implemented in, the Container Runtime Interface, so as not to incur that many changes upstream for Kubernetes, as there are very stable interfaces available; so I don't think there need to be many changes in later efforts. We can also talk about implementing other specifics, like our network infrastructure, which features a full VMware-like distributed...
D
Two weeks ago we kind of, in a lightweight way, discussed some of the potential implications. At a basic level, as developers across the project: we've already gone through a phase of starting to move beyond Linux assumptions in the runtime environment as we added Windows support, and this would be another instance of something similar. So, just at a basic development level, but then also testing, and how one sets up their own local dev and test environments, or more sophisticated test CI, which obviously then gets into the project and test infrastructure.
D
So a lot of learning, probably also some conformance thinking to do; so, a lot of big changes. One of the things that we talked about was: Till had sort of come to us two weeks ago saying, what do I need to do, how can I get added to the mix? And we kind of laid out, let's try to look at the past example of how the Windows work happened in the pre-KEP days, and that was complicated, and communication and planning are complicated on a big project like this.
A
Agreed, so yeah. So, you know, definitely what concerns me are the testing implications that Tim mentioned, as well as resourcing overall, right? We need people who are experts in some of this stuff so that they can carry issues and do testing around platforms that we don't necessarily natively support, as well as providing, you know, CI instances for us to test against. So this has extended to SIG Architecture, and I believe that this week, I'd have to check my notes...
A
I believe that this week we're having a discussion in SIG Architecture about some of this, so I'll be on that call too, and I would suggest anyone that's interested join that call as well. The big question that we have today is: we already have a set of architectures, and, you know, different variants that we produce as artifacts of the current release process, and they don't get the same level of rigor and testing as linux/amd64 does, right? So how do we bridge that gap?
A
We want to cover not just operating system support, but also architecture-level support. So how do we do that effectively as a project? That is the basis for that discussion. I think it's something that's super important, because, as of today, if we get issues... I think one of the ones that Ben had pointed out was:
A
There was an issue with kube-proxy for the s390x architecture, and I think that was in 1.17, and that issue took a while to be closed. It was assigned to us, but we essentially had our hands tied, because there's no way for us to test s390x without getting some sort of sponsorship and expertise there.
A
So we want to make sure that we have a clear plan, from the architecture level and from the release level, on how we decide for or against bringing things into the project, because not only do we need to do it, but we need to be able to support it when we do. So, thank you. It's going to be a bit of a road.
A
So, while I'm doing that: Lubomir, I see you're here. There was a recent change, a PR proposed for a change to some of the specs, I think the kubelet spec. Do you want to talk a little bit about that one? I have thoughts, but I haven't added them to the issue and PR just yet. So, did you ask me? Yes, yes; I think it's a change to a kubelet spec in k/release that you're commenting on. Yeah.
G
So, basically, this implies that we have to change the service file, but we should not do this for past versions of older Kubernetes, so there has to be a way for us to make such disruptive changes only to the new minor Kubernetes version. In the old days, and I believe I said this to Sasha, in the old days before kubepkg, we used to have a bunch of Go code that would...
G
...conditionally set or modify some aspects of the kubelet spec, for instance. This is difficult to maintain; it's doable, but I remember it was a bit of a mess that nobody wanted to touch. So, yeah, short term we can do that; long term, there has to be a way to have pretty much separate specs for the separate versions.
A
It's still a bit of a mess. kubepkg actually came out of a refactor of that Debian build Go code, which included a bunch of the conditionals, so I ripped out a lot of the conditionals that are no longer relevant. Conditionals that were like, oh, after 1.11, let's do X, conditionals that no longer made sense, were removed. But yeah, we do have a place to add conditional logic for this stuff.
A
Just to give you kind of an idea of the grand plan, because I know a lot of people were talking about the fact that specs are, or should be, content, and the content should be in the place where we hold all of the content, which is kubernetes/kubernetes. The reason for the initial removal of the package definitions and the specs on the kubernetes/kubernetes side was that we had two, right?
A
We had a set of specs that lived in kubernetes/kubernetes and we had a set of specs that lived in kubernetes/release, and they drifted, and what we had was kind of inaccurate, because the ones that we use for releases are the ones that are in k/release, and the ones that we use in k/k were strictly only used for testing. So the place that I want to see us get to is that we have...
A
...the written specs somewhere else, right? So the near term was going to be: occasionally, some release manager comes in and writes the new specs; basically runs kubepkg, gets the output of the new specs for 1.19, 1.18, what have you, and writes them to some repository, whether that be k/release or k/k, right?
A
So we're not in that state right now, and we actually don't currently use kubepkg for the release process. That has to do with the way we interact with the Google side, because we don't actually cut our debs and RPMs; there's a set of build admins, who are Googlers, who have access to the apt and yum repositories that we use for Kubernetes package publishing, and they use a tool called rapture. Sorry, Tim has his hand up, so go for it.
D
For this, we can make it simpler. We do have that big long-term stuff to figure out, but in this case, without going into the spec files and all of that, the one piece of content that we would need to change is the systemd unit file, so that could be managed distinctly from the other parts, and more easily, I think.
D
I don't know that side of the world, but if it's just the systemd unit file, we could have two different systemd unit files, one that's default-on and one that's default-off, and then there's the code that's choosing to pull them in. In the Debian case, I guess we'd have to copy one or the other into the directory for use at packaging time; for the Red Hat one, we could just conditionally use the one that we want in the spec for a given version, since we don't have forked specs. But that feels more...
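The approach described above, two unit-file variants with a packaging-time choice, might be sketched as follows. The file names, contents, and the 1.19 cutoff are assumptions for illustration, not the actual k/release layout.

```shell
# Hypothetical packaging-time choice between two systemd unit
# variants: one default-enabled, one not. Names, contents, and the
# 1.19 cutoff are made up for illustration.
stage=$(mktemp -d)
printf '[Install]\nWantedBy=multi-user.target\n' \
  > "$stage/kubelet.service.default-on"
printf '# no [Install] section; not enabled by default\n' \
  > "$stage/kubelet.service.default-off"

version="1.19.0"
minor=$(echo "$version" | cut -d. -f2)

# Debian-style: copy the wanted variant into place at build time.
if [ "$minor" -ge 19 ]; then
  cp "$stage/kubelet.service.default-off" "$stage/kubelet.service"
else
  cp "$stage/kubelet.service.default-on" "$stage/kubelet.service"
fi
```

On the RPM side, the same choice could instead live in the spec as a version conditional, as Tim suggests, rather than as two staged files.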
A
...how we would like to solve it. And we've made some strides in being able to solve it, like: this will be the change right here; if that was merged into master, it would go into the next set of patch releases, and it's not necessarily something we want today. My big concern is: does this fix it for kubeadm, and break a whole bunch of other consumers? That's my big concern. Sorry.
D
The standard choice... in my experience, projects in the open-source ecosystem do not default-enable themselves, but leave it up to the distribution to make those choices, and I think we should follow that pattern. Now, I would argue, you know, no argument with myself, about whether we're a distribution, but I would say it makes sense for us to default to not enabling them, as our things stand today in these packages.
G
So the unit file change in particular, which is disabling the service by default, is more of a fix for a certain annoyance, and it's not that critical. On those notes, we've seen people complain about it, but they have workarounds; they patch service files; they have their own solutions for that. The bigger problem that we are going to face is the change in the drop-in file for kubeadm, which is the 10-kubeadm conf. This file includes some kubelet flags that are soon going to go away, such as the dynamic config and bootstrap kubeconfig ones.
G
The kubelet has plans to remove these flags, I don't know when, maybe 1.21, but when the kubelet does that, the file will no longer be compatible with this particular release version, and kubeadm has to adapt as well to comply with the change. And the change is going to apply only to this particular new release, this particular snapshot of the drop-in file. So, at that point, we either have to...
A
Just be sure to loop myself and Tim in on those, so we can give opinions. I think what we kind of deal with today is the assumption that the files will essentially be the same across multiple releases and patches, multiple minor and patch versions. We did have some conditional logic for the 10-kubeadm, kubelet, and kubeadm configs before, but we've basically moved past the point of conditionals; I think it was up until, like, 1.11 or something that we cared about the changes between those. So we can reimplement it.
A
Yes, so that's what I was alluding to at the beginning: the specs will move back to k/k, but only once we've perfected what's happening near the release artifacts, right? So someone will come along, generate the specs for 1.19, say, and then publish them in k/k, and those can be the specs moving forward for k/k. Currently, the way the templates are laid out is kind of like templates/latest, and then deb and rpm.
A
The idea there is that there would eventually be a templates/1.19 deb or rpm, a templates/1.18 deb or rpm, and so on and so forth. So I think it's fine to get those back into k/k. What I don't want is people operating off of the assumption that the stuff that is in k/k is true and is going to be the stuff that we use for the release, when that is definitely not true today.
A
Okay, all right. Oh, I have one more late-stage update: we're also working on the Golang updates. We have the 1.14.4 update in flight; 1.14.4 is currently blocked on an etcd bbolt update. I believe that PR has merged, but we're still waiting for them to cut a release. So once that release is cut, we can bump bbolt within kubernetes/kubernetes, and we should be able to move forward there.
A
An additional blocker that we have right now is Bazel and dealing with repo-infra. kubernetes/repo-infra holds the Bazel rules_go, and we often need to bump rules_go to be able to support a new version of Go. So before we can update to 1.14.4, or update to 1.13.12, which is also in flight, we need to update rules_go, and I believe I already have a PR that's working on that. So, yeah, stay tuned. Okay, yeah.
A
Or, no, sorry, actually, I think it's local-only right now. Okay, thank you. But yes, I'll get that up! It's a secret PR.