From YouTube: Kubernetes SIG Release 20200113
Description
Weekly Kubernetes SIG release meeting
A: ...meeting right now, and if you were here next week, it could be the release engineering meeting, so stay tuned for that. I will send that out on all of the mailing lists, as well as Slack. Cool, any questions? Sweet, all right. Moving on, the next one is grooming. We've got a bunch of issues open and we should groom them. We're in the midst of 1.18, so we want to make sure we get a jump on all that stuff. So, one: if you're assigned to an issue, take a moment to think about it. Think about whether or not you can work on it in 1.18, and potentially reassign it. If you're part of a previous release team and no longer think you're able to work on that stuff, feel free to ping us and we'll get it assigned to someone else. Yeah.
A: So over the next few weeks, we'll see a flurry of assignments and priority/important-soon labels and yada yada yada for the 1.18 cycle. Feel free to do some of that yourself, if you're already assigned to things, or if there are interesting issues that you've noticed that you'd like to pick up; just ping us on the issues too. Cool.
A: And some of that is our responsibility. So if you have opinions on how we do what we do every year, please take a moment to fill out that survey. Cool, all right. The next one is kubeadm out of tree. SIG Cluster Lifecycle has submitted a KEP that is marked as implementable, or the PR has; the KEP itself isn't implementable right now. That KEP is going to require quite a bit of interaction with SIG Release, so I want the release engineering subproject to be aware of it. There are things talked about in there like deb and rpm package production, tagging and branching of releases, and the timing of who goes first: is it the k/release repo getting tagged, or is it kubernetes/kubernetes, or is it the kubeadm repo? So a lot of things will be happening around that, and I think what I want to do is make sure that we prioritize those tasks so that they can be successful for 1.18.
A: So, talking about building debs and rpms: I've been working on kubepkg, which is partially a refactor of what was the deb builder, or whatever we were calling it, the random script within the k/release repo that built our debs before. That's a little cleaner now. It's a Cobra-based CLI utility, and it does exactly what the title of the agenda item says: it builds stuff! Well, it builds debs; right now it doesn't build rpms.
A: So I linked the initial PRs as well as the README for kubepkg. The README is kind of a quick one, which is just an output of the usage from the command-line utility, as well as some of the known issues. Two of the known issues: the way we do validation for the packages, channels, and architectures within the tool is a little broken right now. If you submit multiple command-line options for, say, packages, it will interpret those options as one string with multiple commas in it, as opposed to a comma-separated string, which should get interpreted as a string array. So I know about that one; it's listed in the known issues. As well as the fact that we can only build rpm specs right now.
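The comma-parsing known issue described above can be sketched like this; a minimal Python illustration (kubepkg itself is a Go/Cobra tool, so the flag name and behavior here are only an approximation of the bug being described):

```python
import argparse

def parse_packages(argv):
    """Parse repeated --packages flags, each of which may be comma-separated."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--packages", action="append", default=None)
    occurrences = parser.parse_args(argv).packages or []
    # Buggy interpretation: everything joined into one string with commas in it.
    buggy = ",".join(occurrences)
    # Intended interpretation: split each occurrence on commas and flatten
    # into a proper string array.
    fixed = [p for chunk in occurrences for p in chunk.split(",") if p]
    return buggy, fixed

buggy, fixed = parse_packages(["--packages", "kubelet,kubeadm", "--packages", "kubectl"])
print(buggy)  # kubelet,kubeadm,kubectl
print(fixed)  # ['kubelet', 'kubeadm', 'kubectl']
```

The difference matters downstream: validation that expects a string array will reject (or mis-handle) the single comma-laden string.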
A: I need some time to sit down and spin up a VM, or a Docker container image, that has the capability to do all the deb stuff and all the rpm stuff, and once I have some time to do that, I'll start tweaking the rpms. You'll notice that the old scripts in the repo are gone: the build stuff for rpms, and anything that was basically in /build, is gone in k/release, and all of that has been moved into cmd/kubepkg.
A: So that includes... well, most of that stuff has been moved. That includes the Dockerfiles, which have been moved to the top level of the repo, and the cloud build as well. Basically, the reason for that is that we have to figure out versioning and branching for the repo first. We may want to have a separate go.mod submodule for kubepkg, so that people can basically build within that folder.
A: So right now it's at the top level so that it can take advantage of copying in the go.mod and go.sum from the top level, and all the extra bits that are required. We can get into how the directory should be designed a little later; we'll probably try to move those things later, but I want to make sure that at least we have a place where the image is actually building.
A: So the images for kubepkg live in the k8s-staging-release-test container registry; the names of the images are kubepkg and kubepkg-rpm. I want to complete this cycle, and I think we're close-ish, famous last words, but I gave a demo of kubepkg during the release engineering meeting last week, so if you're interested in seeing that, I won't go over it again here.
A: But yeah, so that's gonna be something that the kubeadm side is going to want to use.
A: So I'd like that to change, and so essentially there is a separate template... except if you run kubepkg and one of the subcommands, debs or rpms, with the spec-only flag, it'll allow you to output the current specs for debs or rpms. The idea would be that a release manager would then commit those to the repo; we'll figure out how we want to structure that, whether it be like specs/latest or 1.17.blah.
A: Another piece of that is that we'll be building this tool, but we'll also be trying to get a better understanding of how Rapture works. Rapture is the tool that Google uses to take the packages that we build and then publish them to the apt and yum repos. So we need to understand the inputs and outputs for that. I have an action item to write up an issue about exactly what we need from Rapture, so hopefully I can drop that in this week.
A: Comments, concerns? Okay, all right. By the way, it appears we might not have a note taker; if anyone is interested in doing it (I probably should have mentioned that at the beginning), you'll get the prestigious honor of being listed on our agenda as a note taker. Okay, next one up: patch releases tomorrow. So Tim and I are going to be handling the patch releases for 1.17.1, 1.16.5, and 1.15.8. The issues are linked in the agenda.
A: Our builds are happening out of order, so Tim and I want to be available to directly address that as the builds are running. I'm gonna be doing some poking at anago today and seeing if I can figure out exactly why the builds are running out of order. But we have some fixes in our back pocket that we'll need to execute on fairly quickly, so we just want to be available for that.
A: He mentioned that there are a few kubeadm tests that use 1.14, basically skew tests for kubeadm, and they're broken right now. They're broken as a result of the fun tagging issue. So once we fix the tagging issue... like, I would prefer them to not be broken for the entire cycle, but I can also understand if we don't want to cut another release just for that. I don't think it's critical to the project, but not necessarily to the community consuming releases, so I...
C: I want to read through that, and if I can catch him (he's ten hours off of me, so it's practically the end of the business day for him, but I know he sometimes has unusual hours in his local time zone), I'm gonna get a hold of him and try to get a little bit more detail on that, and describe what you and I had discussed last week about hopefully having the new artifacts that we build tomorrow be good, and understand what he feels would still be missing with those. And yeah.
A: But it will, yes; they may be tagged on the same commit. They definitely would be tagged on the same commit. So, for the people who aren't aware, what we're talking about is basically this: when we do an official release, that's like a patch release or the dot-zero, two things happen, or two releases happen, right? The first one is the official release, and the second one is a staging of the next beta for that branch.
A: So if we cut 1.17.1, it's also doing a 1.17.2-beta.0, yeah, right. Basically, you prep our stuff for the next leg. I'm not sure that we should continue doing that, but also anago is this monstrous mess that we're learning more and more about every day now.
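The two-releases-per-cut pattern above can be sketched as follows; a hedged illustration assuming standard Kubernetes tag shapes, not anago's actual code:

```python
import re

def staged_beta_for(official_tag):
    """Given an official release tag vX.Y.Z, return the next beta tag
    that gets staged on the same release branch: vX.Y.(Z+1)-beta.0."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", official_tag)
    if not m:
        raise ValueError(f"not an official release tag: {official_tag}")
    major, minor, patch = map(int, m.groups())
    return f"v{major}.{minor}.{patch + 1}-beta.0"

print(staged_beta_for("v1.17.1"))  # v1.17.2-beta.0
```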
A: So we need to make sure that... he set the version fields in that commit for the OpenAPI spec. We need to make sure that whatever scripts are running in k/k that update the OpenAPI spec are not pulling it from some source that we don't know about, because if that is still bumping it to some version of Kubernetes, we'll still have the problem.
A: So we're gonna work on that today. The second piece of that is that the jobs were failing because, ultimately, the get-kube.sh script is looking for a certain version of Kubernetes in our GCS buckets, whether it be in the CI bucket or the release bucket. And because it tagged out of order, the version that was pushed was essentially a commit ahead. Part of our logic does a git describe, and if it is a commit ahead, you get a commits-ahead, commit-ish version, right? So it'd be like 1.17.1-12-g and some SHA, and that doesn't match the regex that we use to determine where build artifacts are located. So, in addition to removing the OpenAPI spec bump logic, we also need to mess around with the regex that determines where CI and release artifacts are located.
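The version-string mismatch above can be illustrated like this; the real regex lives in the Kubernetes release tooling, so the pattern here is only an approximation for explanation:

```python
import re

# Approximation of a release-version pattern: a clean tag, optionally a
# pre-release suffix, and nothing else.
RELEASE_VERSION = re.compile(r"^v\d+\.\d+\.\d+(-(alpha|beta|rc)\.\d+)?$")

def artifacts_locatable(version):
    """Return True if the version string would match the artifact-location
    regex; a git-describe string that is a commit ahead will not."""
    return bool(RELEASE_VERSION.match(version))

print(artifacts_locatable("v1.17.1"))             # True
print(artifacts_locatable("v1.17.1-12-gdeadbee"))  # False
```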
A: We had done a kind of hotfix-y thing after the last patch releases to get the systems moving again, where I manually tagged each of the release branches (so 1.17, 1.16, 1.15) to the next beta. So they were tagged beta.1, which our CI recognizes as a new build tag, and it kicks off a new build from there.
A: The new build has a correct version string that matches the regex and allows people to use get-kube.sh again, so we need to edit those as well. Tim, what I was mentioning about 1.14 is that the skew tests are comparing 1.14 and 1.15, right? So if we determined that we have fixed all this stuff, then we would need to backport it to the 1.14 branch before cutting another release to fix those jobs.
A: We have to determine if we merge this PR. The PR is also going to be a conflict magnet: job generation things, and it's a little messy. It's messy because of the way we do the version markers in Kubernetes. So not the latest/stable x.y markers, but the k8s dev, beta, and stable-1/2/3 markers; those markers basically slide every release cycle, and at different times.
A: So when we cut a branch, a new release branch, that is, when we cut the 1.18 release branch, they will slide: 1.18 will become beta, stable1 will become 1.17, stable2 will become 1.16, and stable3 will be 1.15. So part of what I'm trying to do in this PR is detangle the logic between when we delete the last supported branch's jobs and when we create jobs for the new branch.
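The marker sliding described above can be sketched as follows; the marker names here are illustrative rather than the exact test-infra spelling:

```python
def slide_markers(new_minor):
    """Point 'beta' at the newly cut 1.<new_minor> branch and shift each
    stable marker down one minor version."""
    slid = {}
    minor = new_minor
    for name in ["beta", "stable1", "stable2", "stable3"]:
        slid[name] = f"1.{minor}"
        minor -= 1
    return slid

print(slide_markers(18))
# {'beta': '1.18', 'stable1': '1.17', 'stable2': '1.16', 'stable3': '1.15'}
```

Because every marker moves at once when a branch is cut, anything keyed off a generic marker silently changes which branch it tests, which is part of why the generated-job logic is hard to untangle.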
A: A lot of the tools that we use on the branch management side were written by Kathryn during our great test-infra job restructuring around 1.14 or 1.15, and those tools have recently been moved to a releng... rather, a releng directory within the test-infra repo, if you want to take a look at what those tools do.
A: So what I've done is add an OWNERS file for that, so that SIG Release can more actively take an interest in cleaning up those jobs, since it looks like we are primarily the only ones who use them. There's also a PR in flight around updating the generated jobs; the generator has a bug in it where it strips the environment flags for certain jobs, so I'm looking into that one as well.
A: Starting the conversation, or rather continuing the conversation, around removing the generic version markers. But before we can do that, we need to make sure that jobs that are manually configured do not use those generic version markers. So if you've ever seen the fork-per-release-generic-suffix annotation, or something like that, set to true on a job: when the job is forked, that's what will produce a job like job-foo-beta or job-foo-stable1, with that generic suffix.
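The forking behavior described above can be sketched like this; the naming scheme is an approximation of what the job generation produces, not its actual code:

```python
def forked_job_names(base, generic_suffix, versions):
    """Fork a job name either with generic marker suffixes (job-foo-beta,
    job-foo-stable1, ...) or with explicit branch versions (job-foo-1-17)."""
    if generic_suffix:
        suffixes = ["beta", "stable1", "stable2", "stable3"]
    else:
        # Branch-targeted names, e.g. 1.17 -> 1-17.
        suffixes = [v.replace(".", "-") for v in versions]
    return [f"{base}-{s}" for s in suffixes]

print(forked_job_names("job-foo", True, []))
# ['job-foo-beta', 'job-foo-stable1', 'job-foo-stable2', 'job-foo-stable3']
print(forked_job_names("job-foo", False, ["1.17", "1.16"]))
# ['job-foo-1-17', 'job-foo-1-16']
```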
A: So instead, we want jobs that are targeted to the branches that they are testing against, so job-foo-1-17, job-foo-1-16, and so on and so forth. Right, that is the easy part. The harder part is refactoring releng/test_config.yaml, which is the YAML that gets slurped up to do the configuration of the generated jobs for testing for us. In that PR I've also added us as owners of those jobs.
A: All right, cool, super. So the next one is creating the CI Signal subproject. I've been thinking about this for a little bit, and it seems that Jorge has also been thinking about it, so we've been chatting back and forth about the idea of creating a CI Signal subproject. Essentially, what this is, is maintaining the body of expertise that we build up over the release cycle. Think about what we did on the release engineering side, right?
A: We pulled the patch release manager off the release team and created a team around patch releases, and then also did the same for branch management. So we want to do the same for CI signal. The idea would be that CI signal remains as is: there will still be a CI signal liaison, or whatever we call this thing.
A: Basically, the person who is the team lead right now for 1.18, as well as their shadows; and the subproject would be composed of a bunch of really clever people who have bent CI signal to their will over the past few release cycles. So several of the CI signal role leads from past releases, as well as people who are interested in getting involved.
A: So if you have interest in getting involved, let me know. I have to put together a quick proposal for that, defining what the team is doing, and making sure that we talk about shifting the activities of the subproject from reactive to proactive. There's a lot of: we found a failing test, such-and-such is failing, okay, let's go fix the test, or let's go contact the owners to fix the test; a lot of the work that we've been poking at around...
B: It kind of feels like, unlike patch release, I don't feel like this team is gonna replace the release team role, because theoretically what the release team role does is very different: the release lead says, are we ready to release, and CI signal is one of those roles that says, no, stuff's still broken.
A: Right, so yeah. The idea would be that where the CI signal team within the release team currently focuses on what's in master-blocking and master-informing, and 1.18 blocking and informing, the CI signal subproject would be focused on all blocking, all informing, and the way those jobs are generated: how those things are handled, who has owners and who doesn't, where the flakes are, building dashboards around that stuff. And Josh, I would want you to be a part of that, obviously. Yeah.
A: Okay, all right, so the last one... so last call, seriously last call, for agenda items, if anyone has anything else; I know I've been talking most of the time. So the last one that's on the agenda is discussing what the k/release repo tagging and branching policies should be. There's a lot of good content there already; I love the write-ups that Lubomir and Hannes and Tim have dropped in there. I'm thinking that initially we can start light.
A: Maybe we don't need to do tagging... maybe we don't need to do branching, rather, immediately, and I think that's echoing what Hannes said: just get tagging in place. As we're starting to build more and more Go tools, it might make sense to have multiple tag types that reference the tools, so kubernetes/v-blah, krel/v-blah, and kubepkg/v-blah, right, as well as a top-level tag for the repo. So we've been going along the 0.x versions; I think we're at 0.1 right now, right?
A: And yeah, I don't know, there are still things to figure out; it'd be nice to figure them out this cycle. I think we need to, to support the kubeadm out-of-tree stuff. There are also things to figure out around how we can do branching in a generic manner. Is this something that's already functioning in test-infra? Is it something where we can borrow some art from the publishing-bot? Can we actually just use the publishing-bot for this?
C: Cool. One of the things that we discussed here was that the code we have today is kind of your classic forklift of a monolithic process into the cloud. We do a cloud build, but really we just have a giant pile of bash that runs sequentially in a VM. Not super interesting. We'd like this to eventually evolve towards more of a cloud native pipeline, something akin to what people talk about as GitOps, kind of going back to earlier...
C: ...in the meeting, when Steven mentioned having a build that comes from a spec that's not on a machine but is in git. If we have each of these tools pipelined together, and maybe they each run in a container, and each of those containers is versioned independently, that's where you start to realize that you have a potential problem with tagging and support, depending on how forked this stuff becomes in terms of enhancements and trailing support.
C: If we are trying to rebuild one more release, 1.14, this week, for example, we may not dare to do that with the tooling that we're building right now for 1.18. We might want to use the 1.14 branch's equivalent of the tooling, so that there's continuity for that stream of building and the tooling used for the building.
C: So you could maybe imagine a YAML file that specifies: I'm using these five tools at these five different versions. But that needs to be reflected in the repo, and somehow that forking, a variation of the code, has to happen. So this probably ends up as branches. Maybe we have a master, maybe we have a branch per tool and a branch per release, and some interesting merging of content between those, and tagging as well, obviously. Yeah.
A: And if we can prove we can do those successfully, then we sign them and we publish them somewhere, right? So we want to eventually build something where we're taking... you know, I think this is something that Hannes alluded to earlier last year, when he did the demo with Concourse: there should be a pipeline where there's clear entry and exit criteria for each of these steps, whereas today we conflate a bunch of the steps based on the tool that we're using. So this impresses the importance of refactoring anago this year.
A: It's gonna be a massive undertaking, but I think we have learned a lot of good lessons from starting to refactor things like changelog and fast-forward and push-build and so on and so forth. So I think we'll be successful; I think we just need to understand the best way to jump into anago, because enough of it is intertwined that, as we're doing this, we kind of have to do it all at once.
A: And the idea is to use that for the new k8s-staging-release-test GCP project, so that's us moving away from Google infra and onto k8s infra. But obviously there are a lot of things happening at the same time: moving away from that infra, also building a system that can run in either infra, and then refactoring the tool that runs on that system. It's a lot. It's a lot of stuff.
C: The call for maintainer track content for KubeCon EU came out late last week. We typically do an intro and a deep dive session. We have a tentative plan for the deep dive; Sasha was interested in doing a talk, and the early drafting that we had done last fall (I think it was in late November, early December) is linked there in a Google Doc. For the intro, we typically do an overview of what SIG Release is, what the release team's part is, and what the cycle looks like.
C: We don't have to do that; it's something we've done a bunch of, and there are lots of videos out there, so it's not like this isn't discoverable. So we could also mix it up a bit and do something different there. But we have seven days at this point to submit our content, and if anybody is interested... especially, I would say, if we have new folks on the call who've been around SIG Release just a bit this last year, and if you feel like you have something interesting that you've noticed or wondered about.
A: Because I know a few people said they weren't attending EU, but I will definitely be there, and I think we have some critical mass to say go, based on the people I've talked to already. So yeah, if you're going and you're potentially interested in this, just drop a note on the agenda. And this goes for anybody listening to the call afterwards: please, if you're interested, drop a note on the agenda.
A: Okay, all right. Well, this is going to be a busy cycle, y'all, between KubeCon on our heels, as well as the kubeadm out-of-tree stuff and all the release engineering work going on. So if there are things that you want to do to help out, please let me know. You'll notice that we will be kicking things out of the milestone and saying, this is not critical for this release cycle, and so on and so forth.
A: So just be aware of that, please. If you are on SIG Release, the release team, release engineering, or the CI signal or licensing subprojects, be sure that you're watching your issues. We really need you to be incredibly attentive to the issues and PRs that you have open this cycle, because it's going to take all of us to be super successful there.