From YouTube: Kubernetes Release Engineering 20200106
A: Happy new year, Kubernetes folks! This is January 6; it's the Monday release engineering meeting. This is a meeting that is recorded and available on the internet, so please be mindful of what you say and do, please be sure to adhere to the Kubernetes code of conduct, and in general, just be awesome people.
A: So, as I mentioned before I started recording, we don't have a very large agenda, but if there is something that you want to discuss, please take a moment to drop it in the open discussion section. So, first off (I guess the only item right now), I've been working on a tool that we were talking about a little bit, Honus, myself, and Tim, during KubeCon. It is called kubepkg, "kube package."
A: The idea for that tool is basically that we want to be able to build debs and RPMs for the Kubernetes artifacts that we publish to GCS, Google Cloud Storage. What we would like to be able to do is allow anyone to build them, and build them inside or outside of our normal staging and release process. So I've had some PRs that I've been cranking on for a little bit; the first one merged recently. Friday, maybe? Yeah, Friday, I think. That was essentially a refactor of what was the deb builder.
A: So, a little background context for people who haven't played around with this before: we have two separate tools that we use to build debs and RPMs. On the deb side, it is a Go tool, and on the RPM side it is a bash script that runs a docker container and does a few seds to replace some of the information.
A: So what I eventually would like to see is us get to a place where we're using a single tool to build both the debs and the RPMs. So what I've been working on... let me just share my screen. So the first piece was essentially just taking the deb portion of it, what was in the deb builder, and moving it to a format that was a little closer to what we've been working on on the krel side, the Kubernetes release toolbox side.
A: So it's a tool that now, you know, takes command-line options from Cobra, and, you know, it's properly formatted and all that good stuff, and it has some general semblance of testing, which has not been the case for this repo for some time. So thank you in general to everyone who's been working on that: Sascha, Honus, Dan, the scores of other people who have been reviewing those PRs around krel and kubepkg. The second piece that I've been doing is here, and that is templating and RPM support, right.
A: So after it was refactored to look a little bit more like krel, the second piece of that was... my computer, so, okay, alright... the second piece of that was to support templating in the package specs and then add support for building RPMs. So before we were... so first, any questions on any of this? Do I need to back up on anything?
A: Something that happens in between those two places is that it takes a set of templates that are stored in the repo and writes various variables into the templates: whether it's the name of the package, the dependencies of the package, the revision for the package, the download links, and so on and so forth. So we can actually look at that here.
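(For reference: the kind of template being described might look roughly like this. This is a minimal sketch; the field names are illustrative assumptions, not necessarily the exact template variables kubepkg uses.)

    # debian/control (illustrative template excerpt)
    Package: {{ .PackageName }}
    Version: {{ .Version }}-{{ .Revision }}
    Architecture: {{ .Arch }}
    Depends: {{ .Dependencies }}

    # debian/rules (illustrative template excerpt): where to curl the artifact from
    curl -sSL {{ .DownloadLinkBase }}/bin/linux/{{ .Arch }}/{{ .PackageName }}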
A: So as an example, we could look at the control section for one of the debs, right? So this is the kubernetes-cni package, and, you know, the way it's kind of laid out right now... if this is a little too small, let me know... the way it's kind of laid out right now is we've got a... okay, okay, cool, sorry, computer technical difficulties. So we've got the build directory within the debs side.
A: It's now renamed to templates, because that's actually what those are, and then, essentially, the directory structure that would be required by a dpkg-built package or so: the changelog, the compatibility file, the control file. So just as an example, we can look at the control file and see that there is some Go templating happening here on the build arch, and if we looked at a file like the kubeadm one, we could look at...
A: ...maybe the rules for kubeadm, right, and you can see that there's more templating: the download link base, the architecture. So it's basically telling you exactly where to curl from and drop this into the build workspace for Debian, and then, you know, all the instructions to actually build the deb. So I wanted to do something close to that for the RPM side, but I also wanted to make it so that... what often happens is, we...
A: What always happens is that we build the package, and we don't have a record of what the spec was at the point in time that we built it; that's not committed to a repo anywhere. And then we also have the problem that, as we improve these tools, the way that we do things changes, right? So say we bump a dependency: the next time that we build packages for that, the next time that we build packages for different architectures for that package...

A: ...repo, right. So if, you know, the required version, or the minimum version, of kubernetes-cni was 0.7.5, which it currently is (although kubernetes-cni 0.8.3 is out right now), and we were to bump the minimum version to 0.8.3 for kubernetes-cni, that means every build of Kubernetes against kubernetes-cni from there forward would take that minimum. So... someone isn't muted right now, and they're kind of breathing into the mic.
A: So if we just did debs... and let's say... so basically, you can see the help for this now, so you can supply a few flags to this utility. And what we're gonna do is we're gonna do a kubepkg, and we're gonna choose one package, we're gonna choose one channel, right. So the channels are release, testing, and nightly: things that actually land in the kubernetes-release GCS bucket, under release/... and then testing is stuff that lands... So release is the official releases of Kubernetes; testing...
A: ...release. And then let's say arch... thank you... amd64, which is our usual, and we'll choose a package, which will be kubectl, right. And if I say spec-only (this is a new feature), if I say... okay, cool, awesome, it's "channels", and I'm still on... okay, I've got to add... actually, you've got to actually have the commit, right.
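(The invocation being typed looks roughly like the following; the flag spellings are approximations of what is shown on screen and may differ from the current kubepkg CLI.)

    kubepkg debs \
      --channels release \
      --arch amd64 \
      --packages kubectl \
      --spec-only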
A: So these are some debug messages that I'm fixing in this PR right now (this is on the active branch that I'm working on), but it lets you know that we're constructing the builds... we've successfully constructed the builds, and then we're starting to walk them. We're looking at what the Kubernetes version is, we're setting that version, we're figuring out what the download link is going to be, and then setting the version for the individual package, right, and then starting to build the package for amd64.
A: So this is the Go architecture, and then the deb or RPM build architecture, right. It lets you know that spec-only mode was selected, and then it says that we successfully walked the builds, alright. So looking at those builds... if we were to... so, this is in my temp directory right now, and we can see that... wow, so many things.
A: Okay, right. So you can see that what it's done is it's written the specs for kubectl for amd64, and then... so those files that you saw before, you can see that they're no longer templated; we've actually written out that this is going to build 1.17.0-00, so the version and then the revision of that. The compatibility file, not important, right, and kubectl doesn't have any dependencies that we slot in from the tool. But we can see within the rules section that it's also templated out the download link, right.
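(In other words, the rendered spec now carries concrete values where the template had variables; a hypothetical before/after of that rules download line:)

    # template
    curl -sSL {{ .DownloadLinkBase }}/bin/linux/{{ .Arch }}/kubectl
    # rendered (illustrative URL)
    curl -sSL https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl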
A: Okay, right. So that's cool, right? This is basically filling in an intermediary step that we were not writing out before. As a release engineer, I might then say: okay, well, these are the ones that are for the current release; I'm going to take these and copy them into the repo, right, and then commit a PR. So I think that will probably be... once this tool is not moving around as much, I think that will probably be the process.
A: Maybe for 1.18 we can start writing the specs out to the repo. So this would be a task for the release engineers, the release managers, to do: the branch manager and the patch release team, before they actually cut the debs, right. And then the idea would be to also add a template directory flag to the tool, so that we could run the tool against a specific template directory, as opposed to having it generate the template on the fly, right.
A: That way, we know that something that we've put in the repo, something that we've already committed to the repo, is exactly what we're going to be building against, right. So now, if we want to look at a different one... I will also fix this piece where you've got to change directories, right; so I've got to go to the rpms directory right now, and then from here I can do something similar. Let's say I want to build kubeadm.
A: I've got some TODOs in here, because it's still an active PR, but we can look at the spec and see that, okay, it's told me exactly where to go get this; it knows what it has to pull in. These still need to be templatized, but it's also replaced the version and the release number for that, right. With this...
C: ...collect people's opinions? Yeah, sure. So one of the patterns that we've talked about is having this idea of immutable artifacts, and building things and promoting them across channels. If we have a version string like the one shown here, where the string literal alpha or beta or rc is included in the version, we can't do that promotion pattern, because we would have to change that string and rebuild, right.
C: I, from past career experience, really like this promotion idea and not rebuilding things, so that what we deliver is what we tested, as opposed to something that was built after tests passed. Weird bugs sneak into the universe that way. Yeah.
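(The contrast being drawn, sketched with hypothetical package file names: if the channel name is baked into the version string, promotion requires a rebuild; with a monotonically increasing build number, the same artifact can move between channels unchanged.)

    kubeadm_1.17.0-beta.1-00_amd64.deb   # channel literal in the version: must rebuild to promote
    kubeadm_1.17.0-136_amd64.deb         # build number only: promote by republishing to another channel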
A: So the first thing I will say to start that conversation off is: one, we need to be more diligently testing the packages that we're creating, which is not necessarily happening as thoroughly as it could be right now. And two (this is something you and I talked about), just bringing the idea of promotion into anago... it doesn't exist today, so we would definitely have to talk about design. So.
D: If I may, one thing I'm a bit confused by is: I thought the overall goal was to get the specs, finally (the RPM ones, whatever they are called, and the Debian control files and whatnot), into k/k, so that they are versioned with upstream, if you will. Now you talked about, in retrospect, generating those files based off of templates and a version or whatever, and then committing them back into k/k. If I understand that correctly, this is kind of the wrong way around, if you will.
A: So I disagree with it landing in k/k a little bit. The idea of the release repo was... so, there is an issue that was opened, called something like "k/k is the canonical source for build artifacts", or something like that, and that issue was opened as a hope by someone else. I think that the entire reason the release repo exists is to build the things that we need to use for release.

A: To move them out of that repo is not the greatest idea. We can already see that there are problems with having interactions between the release repo and the kubernetes/kubernetes repo with tools like push-build, right. So when we did the shellcheck updates to push-build, we broke a decent chunk of all of the tests for Kubernetes, right. So it's something where, if we have tools here, I would like to isolate them from what happens in kubernetes/kubernetes.
D: I kind of agree with you; I just think we're talking about two different things. You are talking more about the tooling, which I actually don't have a big opinion on, whether it should be external to k/k or not; but I was talking about just the configuration files to build the packages. I still think they should be versioned with the code, and not the other way around, like generated somewhere outside and then the generated stuff put back into k/k by, I don't know, an automatic or manual PR or what have you.

D: I think the process kind of should be: okay, I check out k/k, I run a script or a thing (might be external, from k/release or from k/k, who cares), and based on the versioned control file, in the case of the Debian packages, a package should be generated. Now, then we get into all the business Tim talked about, like: which version should it have? Should it be promotable, or do we need to repackage it if we promote it from nightly to something else? But generally, I think the main goal should be that the specs are as close to the code as we can get them, which means in k/k. Otherwise we end up in this weird space where we need to version k/release exactly the way k/k is versioned, basically kind of by commit, because if I bump, I don't know, cri-tools in 1.17.1... I don't know what... I don't need...
C: We have a set of artifacts today that span multiple repos, and compared to when that first issue got opened, about moving things over to k/k as the one canonical place (which was, I think, being driven out of the kubeadm folks, who subsequently have actually left the k/k repo), I feel like over the last two years we're starting to come around to the idea, and the need, to manage things in separate repos. But when it comes to having a deterministic, reproducible build, it gets a little tricky, because... okay, we've already seen this.

C: Actually, it was back in November/December where the order in which tags are applied in different places can lead to unexpected outcomes, and you can't ensure that we make this thing happen correctly across repos in a race-free way. But I think, pragmatically, given the type of changes and who's able to effect changes in the repos, I don't see that being a huge problem, managing them separately, at this point. The one thing that appeals to me philosophically, though, about having things close... so, the dependencies is another area.

C: Maybe that would be the place that we should try to have these spec files, since they're closely related. I could see them ending up in k/k, because that's like the core of this. But I don't think, as a project... there's been a lot of conversation about where these things are most natural, and whichever one place we choose, it makes things unnatural for other places, because then, in order to do things in other places in the project...

C: I think we could start out... I mean, since they are separate today, we could start out operating separately, and we could institute... we need to... we've talked about this for a long time, and it's clearly recognized: we need to branch the k/release repo the exact same way that we do k/k, and they need to move in step. We should be building 1.17 from the 1.17 release branch of k/k and k/release.
A: What I was gonna float, actually, was: what if we did that in anago? I know, I don't necessarily like the idea of shoving more functionality into anago, but I would say adding one more repo to tag and release during the release process. That way, it's not a question of, like, did we run two processes to cut a release for both kubernetes/kubernetes and kubernetes/release, right.
C: One of the things that I had hoped, going back to the original mention of versions: so, in deb and RPM packaging we have the package version number, which typically is always just zeros in our builds. I had hoped that we would get to a place where we were building from committed code, not taking committed code, generating intermediate artifacts, and building from the intermediate artifacts. And in that world, if we had something that was always bumping the package version number, then we'd have a zero, one, two, or three, as you typically see in the world of debs and RPMs today. If you look at your installed packages from the distro, it might be kubernetes 1.17.0, whatever, some distro versioning-number stuff, 136, because they built 136 times, but each one was a unique thing, and it just happened that 136 was the one that they delivered as the release. If we have a number that's always moving up, and all of our builds have a unique number, then... I understand that for humans, alpha/beta/rc has some significance as a string, but we would be able to say, like: currently build 136 is beta; three weeks later, maybe build 136 is actually still the rc, because we're so ultra-stable in this magical future; and then later, build 136 is the release build. It's just that, whichever channel you download from, whatever the highest number there is, is the one that you get.
D: Yeah, I guess one thing we need for debs is a canonical, if you will, master apt mirror, and the same for RPMs. Because essentially what it boils down to is: at the time of building, I need to figure out what is the last number I built for this specific version or commit hash. And I don't think we have... like, we have that, but inside Google, so we don't have real access to that. I mean... well, that's not entirely true; we could also just slap some file in some bucket and manage that ourselves.
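(A rough sketch of that "file in a bucket" idea; the bucket name and layout are hypothetical, and this ignores locking between concurrent builds.)

    # read the last build number for this version, bump it, write it back
    LAST=$(gsutil cat gs://k8s-release-build-numbers/v1.17.0/counter 2>/dev/null || echo 0)
    echo $((LAST + 1)) | gsutil cp - gs://k8s-release-build-numbers/v1.17.0/counter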
C: Yes, you're right, you're right, it does... yeah, it does. Well, so that goes back to how... okay, so it does matter in the implementation. We've seen, in apt and in yum and DNF (all three, and I think some others as well), that it does matter in the implementation of the consuming tool. But if you do publication correctly, it doesn't matter, because you publish as a distro and you only publish into a given directory.
A: So the idea was to get us to... so, if you're in any way, shape, or form around the release process, or the eventual packaging process that happens behind the Google curtain (so, people who are branch managers or patch release team members), you know that they kind of run a tool called rapture, and rapture takes built packages and, um, raptures them up to the Google package repositories, right.

A: Currently, the way that tool is configured, it requires them to use an older version of our release tooling. So I think... there are multiple things that have to happen at the same time. One, we need to build something that is... or, we need to have rapture in some way restructured so that it supports the continued improvements that we're making in the repo to build artifacts, right.

A: We also need to eventually own the way we're publishing the artifacts, right, so making sure that we're, you know, migrating off of Google infrastructure, and that is also happening. We need tools that support the multiple channels that we're talking about (the nightly, the testing, and the release channels), so that when we start talking about promotion, we can actually have tools in place that do promotion, right, or lay down bits that allow us to build things that could be promoted. So... so, yeah: what do you want to do first?
C: If we have that kind of philosophy, we'll eventually get there; it's going to be iterative. I think we long ago already decided that we're not gonna make one fell swoop where we just simply design this new, better place, and it's publishing to a new bucket, and that's the future, and we turn off the old way. How we interact with the existing Google process, though, is tricky around publication, but I...

C: I'm using the word rapture too broadly, I think, and that's where I think I went back to sort of the Google process, which is also hand-wavy, because they do build things, and because of the layout of the directory scheme and the existing contents within those, it all sort of conflates. And as we try to clean things up, the existing publication process is going to end up publishing things that aren't consumable, yeah.
D: Sorry, yeah, go ahead, go ahead... If we decide that apt.kubernetes.io-something-something, the Google thing, is identical, or identical enough, to our new thing (like, we publish on, I don't know, an Azure bucket, I don't care), and if we can even automatically check whether the packages are the same, or at least show the same versions or what have you, then we can eventually switch. So I wouldn't spend too much time on, like, backporting things to rapture. That's, I guess, what I'm trying to say, yeah.
C: Maybe this is the time to go ahead and admit that we are forked. I think, from a code perspective, we've pretty much just argued (whatever the number of commits between the two variants) that we are forked, and to maybe go ahead and just dive in on finishing out. So maybe, for the upcoming cycle...

C: ...we build things the Google way, and we community-own and build and publish to the channels, as an attempt to see what we think of them, and do things like change that version string, and see what we get in the different channels for artifacts, and massage them a bit, and get to a point where we're happy with the output. And for 1.19, maybe we start to talk more seriously about cutting over; we'd have it socialized, KEP'd maybe, at that point, yeah.
A: Yeah, so we've... so, yeah. Definitely things like... if anyone has been following the continuing attempts to upgrade CNI across the packages, or the fact that the cri-tools have not been published since 1.12 (so for new versions of Kubernetes, so 1.17, it's basically pinned to 1.12 right now), this is something that has been ongoing, because we don't have, essentially... the way our tools are built...
A: Cool, yeah... wow, this is not working. What I wanted to pull up were the RPMs, kubeadm's spec, right? So, just as an example, right: the cri-tools version is static here, and there's no witchcraft that... like, this is basically... this is some macro that, you know, figures out what the Kubernetes version is. But there are also some things to consider, like: what if you're trying to build... what if you're trying to build cri-tools for a version that doesn't exist yet? If...
C: Can you back up for a second? So this is k/release; k/release is not branched, and this one file says 1.13.0, and we know that we build for things other than Kubernetes 1.13.0, right. So the way we build today is to modify the spec file and build from the modified, uncommitted spec file, right. So if you do that same pattern and change the cri-tools version, you would get a modified cri-tools version when you publish it. So, building the way we build today...
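(The pattern being described, as an illustrative spec excerpt; the macro names approximate the real spec rather than quote it. The Kubernetes version macro is rewritten in place, uncommitted, at build time, while the cri-tools version sits next to it as a static value that nothing bumps automatically.)

    %global KUBE_VERSION      1.13.0   # modified in place at build time, never committed
    %global CRI_TOOLS_VERSION 1.12.0   # static: does not track the Kubernetes version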
C: ...that new version, and all prior clients will consume it, which means that they're using an untested combination of components. So I really do think it's the publication phase and the channels that fix this problem, and then we could go back to actually publishing cri-tools that are versioned in accordance with what we desire, yeah.
C: But what do you think about maybe going ahead, this week or so, and updating the branch management handbooks to reflect the two parallel tools, and for the 1.17 cycle we try to operate them in parallel, even acknowledging that the new way is a total alpha workflow, but start seeing what that highlights and what we get in the published repos?
C: There are longer-term questions around the existence of SIG Release in general, like: how much of this is actually needed? Because people will frequently say, like, the distributions are already gonna do this; Google does this themselves anyway for their Kubernetes; Amazon does it; Microsoft does it; any company that's shipping an enterprise Kubernetes is going to do this stuff, so the community side of it is redundant. But then we have kubeadm, which consumes our SIG Release artifacts, right, and so we do have an existing workflow that uses ours; but we don't have data, even, that tells us much about that usage.
D: [I don't] know much about the Open Build Service. Just to clarify, I was just talking about the artifact storage, like the apt repo and the RPM repo. So I was thinking about: we use kube... what is it called, kubepkg, to build the packages, and then we just hit some random HTTP endpoint, push the package, and magic happens, and there is the updated apt mirror or RPM thing. I was not talking about the Open Build Service building the thing, yeah.
E: Maybe, just kind of spitballing here... you know, I don't know what it would take; I haven't built a yum or an apt repository. But it'd be great to get into some of these EPEL repositories, or some of the extended, you know, upstream branch repositories, and that might be way more work than we're willing to do; but as Kubernetes matures, it would be awesome if you could just grab Ubuntu upstream and install kubectl.
A: Yeah, I know that was brought up a while ago, or, like, we had done some work (and by "we" I mean the community at large; not, I think, any of the people who are currently on the call), but we had, at some point, the Kubernetes packages published somewhere more official, and I think it was maintenance overhead for people. But we should... I'm fine with revisiting that.
B: I think that would be good. Even if it is something that we delegate away, even if it is, like, a process that handles the building and it's a third-party tool, there will still be that managing of the integration itself. So I'd say that work would be interesting to kind of handle and get assigned, mostly.
C: From an implementation perspective, I can't speak to the apt side of the house, but on the yum side of the house, this is literally just a couple of mkdirs and then copying the files you want to the right places. And a typical workflow might have that happen in an internal test/staging place, and then the resulting directory and file structures are synced out to the published place.
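(That yum-side flow, sketched with hypothetical paths: lay out the directories, copy the RPMs in, generate the repo metadata with createrepo, then sync the tree out to the published host.)

    mkdir -p staging/el7/x86_64
    cp kubeadm-1.17.0-0.x86_64.rpm staging/el7/x86_64/
    createrepo staging/el7/x86_64/
    rsync -av staging/ publish-host:/var/www/repos/kubernetes/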
These
are
like,
if
you
go
back
to
for
the
young
side.
C
Again
specifically,
if
you
go
back
to
when
that
stuff
started
and
sort
of
look
like
1995,
the
world
is
a
very
simple
place.
You
had
on
Apache
and
directories
and
files,
and
it
was
very
simple
and
flat-
and
it's
kind
of
followed
through
to
that-
that
you
can
impose
a
little
bit
more
directory
structure
for
variability
for
creating
the
idea
of
channels.
But
there's
a
it's.
C: It's really, really simple. And the idea of official-ness is an interesting one to talk about, because, like, today you may get docker from your distro, or you may add Docker's repositories to your distro config and pull theirs (or Bazel is an example of one where you might pull it down from somewhere else), and I feel like, if we do this well, we become the canonical source for the Kubernetes bits.
C: ...for the community base, anyway. Like, the enterprise distros are still going to be doing their own, and even the community distros may have their own flavor of the thing; but for people who are wanting to run upstream, just like with docker, you read Docker's documentation and it tells you how to add their repo to your config, and you pull from the channel you prefer. We would be something like that, and for many users that's gonna feel native and normal, I think.
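(The docker-style pattern being described is also how the existing Google-hosted Kubernetes packages were consumed at the time; roughly, per the documentation of that era:)

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
        > /etc/apt/sources.list.d/kubernetes.list
    apt-get update && apt-get install -y kubectl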
A: Yeah... everyone, so the release team lead and shadows have theirs now, and the meeting was created earlier today, so they should be fine. As a first conversation: what do we think our immediate next steps are? Are we happy with the general direction of what's been tossed around so far?
A: So, I need to break out the code branches for debs and RPMs: basically do something very similar to what the deb side is doing for dpkg-buildpackage, and do that for the RPM build section, and wire up CI jobs, and actually write the current specs to the repo. I don't think it's too much, but those are also, like, famous last words. I think it's... I think it's close, yeah.
E: Nice, yeah. I was just thinking it seems to make sense to get that tool at least into a working state; it seemed like you made some great progress on that. It'd be nice to get it to a point where we could use it to test some of these theories, evaluate some of the alternatives, and start to go down that path. I don't know... I like the idea, like Tim mentioned earlier, of running things in parallel, so we could, you know, still continue to release the way...
A: There are a bunch of things to do ahead of time to make sure that we can actually build Kubernetes on the new infrastructure, and then test in parallel. So, I've been working on this in a prototype branch on kubernetes/release, if you want to see some of that work, and all the PRs are linked up here. So, yes, that was always my plan: to do them in parallel.
A: Alright, cool. Well, thank you so much; it looks like we only needed the one topic this meeting. Thank you so much, it's good to see your faces, and I hope you're having a good new year so far. For anyone who is on the 1.18 release team or hanging out in that meeting, I will see you in two minutes. Take it easy, happy new year!