From YouTube: Kubernetes SIG Cluster Lifecycle 20170926
Description
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.d8erdtdrdkzg
Highlights:
- Starting the 1.9 planning process
- 1.8 blog post ready for review
- DNS in kubeadm
- Using dynamic kubelet configuration in kubeadm
- Adding contributors for the kubeadm repository
- Upgrade tests for 1.8
- Testing
A: Hello, and welcome to the Cluster Lifecycle meeting for September 26th, 2017. I'll jump right into the agenda; I've got the first two things on it. The first one is 1.9 planning. 1.8 is maybe still on track to be cut tomorrow, but either way it will be cut very soon. So, as we've done for the past couple of releases, I created a doc; if you'd like access to it, please join the mailing list.
A: I stuck a bunch of stuff in the document, sort of carried over from last time, and rejiggered the priorities based on some verbal conversations we'd had during previous SIG meetings. I don't think we're necessarily going to go over what those things are today, but I just wanted to put it out there so people could add other things they think are important for 1.9.
A: Maybe if we have time at the end of the agenda, we can go through it in detail. I don't want it to derail the other agenda items; I just wanted to mention that the doc is there and people should put stuff into it, so maybe we can review it and agree next week. I'm happy to come back to it at the end of the meeting if we have time, and the agenda looks relatively short, so we might.
E: Yes, we are. I think by next week we should have a rough sketch of where we can invest. I don't think anything is out of the ordinary, because we've already started down the path; it's a question of whether or not we're going to have resources to devote towards other pieces. I think the standard ones still apply; they've been carryovers for a couple of cycles now, right?
A: Yeah, I think a lot of the stuff is somewhat in flight and non-controversial, and the question is: are there other things we think are more important, to displace those, or do we have enough people to actually add to that list of things already in flight?
C: Can I ask a huge favor? Something I'm hoping to see really materialize in 1.9, that we started on too late in 1.8, is to have SIGs driving their own product themes. Essentially, what I'd love to see is Cluster Lifecycle have whatever its one or two big themes are for this 1.9 release, tied to your backlog, so that in theory we can start filling out the release notes on day one.
C: Here are the themes that we're shooting for, and then eventually that trickles down into the functionality and features that you add, so that we basically eliminate some of this relatively high-level churn with the product management group that happened this time. So that's an ask.
E: For what it's worth, I think there are colorful adjectives I could probably use about themed releases. Have you ever seen a Linux kernel release with a theme? I think those are named afterwards. There are major features for a push, and maybe, instead of using the word "theme", we could have a listing of features that we're trying to identify and execute against. We should probably err on the side of being conservative, just given the history of how long things take.
C: You're definitely talking to the right person to share that viewpoint with. What I'm trying to do is reconcile an existing process around product management with where we're really at. In 1.8 I've been pretty torn up by this, and it's been pretty difficult, so I'm actually going to propose a relatively radical change to product management generally, and it's more aligned with what you just said. For now, we don't have to call them themes or whatever; I just want to know.
A: That's what I was going to say, too. I think what Jase is really looking for is sort of a roadmap: here's where we're headed as a SIG, and here are the big changes in the next release. Obviously we want to know that at the end of the release, but it helps if we can set our sights at the beginning, and I think that also ties back to the planning doc, where we sort of say: here's where we're headed.
C: That would just fix all the scrambling that happens at new releases. Pardon me for saying this, but it's just stupid: the SIGs are doing a fantastic job planning and doing all these things, and then somehow, at the end of the release, it's a fire drill about what's in a feature or what's in the release. It's like: why? What?
C: As long as somebody's dedicated to it, then I know there are some somebodies managing it. Technically speaking, I think product should be embedded in the team, in the SIG, and not the other way around, because, frankly, in agile, who has a dev go to agile planning meetings? You have a kickoff, you decide what you're going to commit to for the release, and then you do it.
C: Okay, maybe it's just too fresh for me right now, coming off the lead. But I just want to thank you all; you've done amazing work and I appreciate it tremendously. It's been super helpful, and I'm done. Back to your meeting.
A: Any other comments from folks about the 1.9 planning doc or the planning process in general before we move along? Okay; as I said, if we have time at the end, we can probably come back and walk through the doc itself. Next on the list, I put a link to the upgrade index. This was something we created quite a while back, where we were trying to collect from the community how different tools actually perform upgrades.
A: I finally convinced someone at Google to write down what Google Container Engine does for upgrades, and so I just wanted to bring this back to people's attention; we haven't looked at it in a while. The point of this doc was to collect best practices to inform the future of kubeadm. There are a couple of links still missing here: the kops one, and the one Rob had volunteered to write up.
B: Okay, yes. As I said earlier, I was going to write a SIG update for 1.9: basically, what have we done over the last two releases, 1.7 and 1.8, and a little bit of 1.6 as well, just mentioning it was much more secure. The last update was January, or something like that, at least. So I covered what's probably going to make 1.9 and what we're planning to do for the rest of the year.
B: Yeah, now I have one up for review, and I'd appreciate it if you took a quick look at it and maybe commented something, or just LGTM'd it. As I said, it also talks a little bit about what we plan for 1.9 in terms of making self-hosting the default and having HA support. It was basically that, plus mentioning that we now have easier upgrades with kubeadm upgrade, the experimental self-hosting support, and the kubeadm phase command. I don't know; any questions?
D: This does start to look like a roadmap, in the sense of: what are we planning, and where is this stuff going, broadly? So I think there's definitely overlap between this being a little bit theme-like and the more nitty-gritty planning doc that Robbie's been sharing. I think we want to make sure they're consistent.
B: One is adding support for CoreDNS, which we talked about two weeks ago, and pretty much everyone agreed it's something worth going ahead with. The consensus seemed to be: we'll add this behind a feature gate, and that's all fine for 1.9. Then, if we consider CoreDNS to be generally stable, or beta at a mature level, in 1.10 we can enable it by default, that is, make it replace kube-dns by default. There's a closely related issue there.
D: Yeah, so SIG Architecture set a self-imposed timeline, I think end of this week, to try and close on the proposal-process stuff that's going on right now, at least some epoch of it. I think we're going to continue to refine it, but some of this is: let's get a format, let's actually try and get people to start using this stuff, and let's get some sort of review so that people see it. So I think we're trying not to sit on it forever.
C: The first artifact that will come out is a template, so I think that's a good start. So even if you move forward with whatever proposal you're doing before that process lands, if you could retroactively do the template, that would help us diagnose what we're missing as far as metadata or other stuff. We'll be in strong partnership moving forward on that.
D: The question is when and how: we shouldn't take any components that aren't being beaten into submission with the e2e process, all our test coverage, and stuff like that, right? And so there's a chicken-and-egg problem: do we lead in terms of integrating new stuff, or do other people lead? You know, if we switch out kube-up with something that's kubeadm-based, then... so how does this fit into that? So, Lucas, yes.
B: Yes, I mean, it's totally undocumented, and I want to change that. I put up, in just a personal repo, some kind of guide on how to do it. I expect to codify this, and maybe integrate it, making it a kubeadm alpha command. We're going to do that anyway: like, create a job that enables all the features for kubeadm, or at least some, like self-hosting.
A: Yeah, we're also trying to actively peel away the bits of the tests that only run on kube-up right now. So we're working on trying to figure out how to create skew tests with kubeadm and Kubernetes Anywhere, so that we can peel that part of the upgrade tests away. We don't have automated upgrade tests for kubeadm yet, so we're still relying on the kube-up upgrade tests, right.
B: Like the DNS IP: it's, like, the second IP or something in the service range; we just take the tenth. And that basically makes kubeadm init exit 1: instead of telling you it can't, kubeadm tries to update the service to be the tenth IP, this hard-coded thing, and then it basically just fails. So I mean, it's workaroundable.
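The tenth-address rule being described can be sketched in a few lines; this is a minimal illustration using Python's `ipaddress` module, mirroring the hard-coded offset as discussed above, not kubeadm's actual source:

```python
import ipaddress

def dns_service_ip(service_cidr: str) -> str:
    """Return the address reserved for the DNS service: the address at a
    fixed offset of 10 from the start of the service subnet."""
    net = ipaddress.ip_network(service_cidr)
    # network address + 10: the hard-coded offset discussed in the meeting
    return str(net.network_address + 10)

# kubeadm's default service subnet is 10.96.0.0/12
print(dns_service_ip("10.96.0.0/12"))  # -> 10.96.0.10
```

A custom service subnet moves this address, which is exactly why every kubelet's cluster-DNS setting has to move with it.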
B: There's a workaround for this now, but the more general question, or theme, is: how do we configure add-ons? And I mean, this is a painful one. We're all kind of waiting for that add-on API to just pop up, and nobody has any time to research all the nitty-gritty details there are with upgrades, reconfigurations, dynamic configuration. And Brian's work on declarative application management also ties into this; it's such a large topic.
A: Right. I think I was having a conversation with Tim yesterday, when he was in town, about add-ons, and he kind of said he added the add-ons DNS directory because we sort of needed DNS to work, and he didn't expect add-ons to grow organically into the terrible state they're in now. So I think we've generated some technical debt for ourselves there and, like you said, haven't had the manpower to remove it.
B: The third PR, or whatever, is basically making it configurable via the config file. Again, if we're going to push for the kubeadm config API to be, like, an API for the control plane, the question is how it should look: should the DNS service IP be free to set at a cluster or control-plane level, etc.? I mean, the PR that actually calculates the thing in a correct manner is fine, totally fine, but I'm wondering: should we make this configurable?
D: I think there is this question of: do we want to take on add-ons as a general thing, or do we want to say DNS is special, let's just fix it? And I'm leaning towards DNS being kind of special here. Let's just fix it, and we can always do generic add-ons later; but we've got a real problem right now that we can tackle.
B: Yeah, I don't think we can solve the generic add-on issue just for DNS's sake, as we don't have the manpower, so yeah, I think just fix DNS now. But I was just thinking: as we start envisioning the kubeadm config API, which we're going to think about along with things like kubicorn and maybe something like the Cluster API, we think of this as the API of the control plane. Do we want this to belong there? I mean, that's...
A: Yes. I will say that in GKE we have not found that to be necessary, right? Nobody complains if we just pick an IP that works and DNS just works. But if other install tools that are built on top of kubeadm require us to be able to configure that, I don't see any reason why we wouldn't make it an optional field you can configure, because it's pretty easy to plumb through the system.
B: Also, I don't want the tenth-IP problem again. I'm generally hesitant to add more fields to the kubeadm API, to avoid this kube-up thing where everything can be configured, and in the end we don't see the important stuff because way too much is configurable.
F: They could set it as a flag, basically. But I think it really just depends on the use case. I can't really envision why you'd need to override it, but if someone has a strong use case where they need to determine their own offset, I guess one more config field won't hurt. It depends how strong these use cases really are; I mean, I don't really have anything against it.
B: Yeah, that's happening now, but that's because we don't make the information flow; it's a separate issue, right? Because it should make the information flow down from the control plane to the kubelets. So we do have that in kubeadm, yeah; exactly, we do have that. That's all fine in 1.8, and I'm investigating, as we speak, enabling it by default. Kubelet dynamic configuration is probably going to be GA in 1.9, so that's great, but...
F: One sort of incidental plus of adding a separate offset config parameter is that it bubbles up the implicit expectation of what we set it to, and it becomes something people are aware of. Because right now, I guess they have to dig through the codebase and find out what that offset is if they want to manually configure the kubelets. So if we make it configurable, then it might be more obvious to people that you want to configure the kubelet's DNS, the cluster DNS.
B: DNS basically has two knobs right now: the DNS IP and the DNS domain. We already have the domain in the kubeadm configuration, because the API server's serving cert should be signed for, like, the default kubernetes Service name and the DNS domain; otherwise you will get some nasty errors when using cluster DNS. So yeah, it's hard, I guess.
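The dependency being described, that the serving cert must cover names derived from the DNS domain, can be sketched like this; the four in-cluster Service names are the standard ones, and this is an illustration rather than kubeadm's cert code, which also covers hostnames and IPs:

```python
def apiserver_cert_dns_names(dns_domain: str) -> list:
    # Names in-cluster clients use to reach the API server through the
    # default "kubernetes" Service. Only the last entry depends on the
    # cluster's DNS domain, which is why the cert generation needs it.
    return [
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc." + dns_domain,
    ]

print(apiserver_cert_dns_names("cluster.local"))
```

With a non-default DNS domain, the fully qualified name changes, so a cert signed only for `cluster.local` would produce the TLS errors mentioned above.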
B: Yeah, okay, cool. So then, as we talked about a minute ago, we should switch to using kubelet dynamic configuration. I'll link to the documentation PR that myself and Mike have done some work on. Basically, you create a ConfigMap; it can be named whatever, basically, but it has one key, which has to be the kubelet one, and in the value you put the kubelet configuration, using the current API machinery we have for the kubelet.
B: This is currently alpha, but it's probably going to graduate, like to version 1, in 1.9. So the way it basically works is: you set an annotation, I think it was an annotation or a label or whatever, on the kubelet's own Node object, and that basically tells the kubelet to look for configuration there. It will download the configuration from the ConfigMap and put it locally on disk, in some kind of directory.
B: It will then exit 1 and pick up the configuration on the next restart. So yeah, that makes it possible for the first time for kubeadm to flow this information down. Currently, as Joe said, if you go ahead and specify a specific service subnet, DNS is going to break unless you manually configure every kubelet in the system. So it's going to be really, really useful.
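The flow described above could be sketched roughly as follows; note that the ConfigMap key, the annotation key, and the field names here are illustrative placeholders, not the exact alpha API:

```python
# Illustrative sketch of the dynamic kubelet configuration objects.
# A ConfigMap holds the serialized kubelet configuration under a
# single well-known key (named "kubelet" here as a placeholder).
kubelet_configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "node-1-kubelet-config", "namespace": "kube-system"},
    "data": {
        "kubelet": "kind: KubeletConfiguration\nclusterDNS:\n- 10.96.0.10\n",
    },
}

# A patch on the Node object points the kubelet at that ConfigMap
# (the annotation key below is hypothetical). On seeing it, the kubelet
# downloads the ConfigMap, writes it to disk, exits, and picks the new
# configuration up on restart.
node_patch = {
    "metadata": {
        "annotations": {
            "example.k8s.io/kubelet-config": "kube-system/node-1-kubelet-config",
        }
    }
}

print(sorted(kubelet_configmap["data"]))  # -> ['kubelet']
```

This is what lets a tool like kubeadm push a corrected cluster-DNS address to every node instead of requiring manual edits on each machine.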
A: My only hesitation about adopting the feature would be if we didn't think it would actually make it to beta or GA. As long as we think it will hit one of those gates, I don't see any reason we wouldn't want to take it. I know in the past we've adopted things that have only been alpha, but I think that's changing as kubeadm matures and we want to push it towards GA.
B: Basically, what I see on the technical level is: adopting etcd 3.1; adopting kubelet dynamic configuration; removing all the old stuff in the code base that was required as legacy for 1.7, which is already in progress; using priority and preemption, as we talked about, and making that better; certificate rotation for the kubelet serving cert; maybe checkpointing.
B: Self-hosting by default, high availability, and yeah, one interesting one: we should start running Kubemark, or performance tests generally, against kubeadm. I made an issue for that as well. Basically, to move, as Robert said, more and more things to being runnable on kubeadm. I mean, I have no idea how Kubemark is configured, like whether it's easy or hard to run, anyway.
A: Off the top of my head, I believe Kubemark is: create a regular cluster, and then you run pods in that cluster that represent more kubelets, to get sort of fake scale, right? The underlying part, creating a cluster, you already know how to do, so it shouldn't actually be that difficult to switch that test over. I'm sure there's some plumbing where they reuse some of the variables and stuff, though.
A: I was going to say, this is part of the reason we split our issues out of the main repo: so that we could add more people to help us with issue triage and stuff. I think the only thing here is that you need to be part of the org to easily be added. So if you'd like to be someone who can flip labels and so forth, please just request access; that'll go to the current maintainers and we can approve you.
B: And to be able to flip labels for the kubeadm repo... as far as I know, you have to be, like, a Kubernetes maintainer in order to get write access to the main repo, which gives you label access, and there's quite a high bar for that. There is a process, yes, but you have to file a pretty long application request along with a lot of endorsements from people. So it's not as straightforward as just getting it as a side effect.
B: To reiterate, this is about control plane images that are pushed from our CI jobs, which basically makes it possible to just do kubeadm init, or kubeadm upgrade, and say, like, "ci" and a specific version, and it will automatically use these images that are built from basically HEAD. So that's really useful for testing, and we're going to utilize it for, like, an upgrade-to-master job as soon as one is up. Do you have any more questions?
A: Yeah, someone moved up the question about upgrade tests. Okay, upgrade testing: I don't see Jessica on the call, but she pinged me this morning and said that, since the automated upgrade testing for 1.8 is not quite working yet, she's going to start doing the manual upgrade tests today, so that we do have some signal that upgrades to 1.8 work before we cut the release. And I've got that sort of at the very top of the 1.9 list of issues.
B: Well, I'm going to ping them again. So basically, last night, while the Europeans were sleeping, they discussed the kubetest issue. I mean, the upgrade test for kubeadm is right now blocked on some kind of really weird test-infra issue: basically, it starts the cluster successfully and executes some initial e2e tests, but then, when it's going to invoke kubetest, the main testing artifact, again, it just execs the wrong path, and it's pretty annoying.
A: I linked the PR in the meeting notes also, but barring that not getting in, we're still going to do manual testing, because if the release is going to get cut tomorrow, even if we get the automated testing working today, we're still probably not going to have very many runs of it by the time everything gets merged and we work out any other kinks. So we'll just have some signal from manual testing and, like I said, I think we're close on the automated testing, so we'll get that finished up regardless of when the release is cut.
B: Yeah. Jase, are you still on the call? ("I am indeed.") Cool, so yeah, just to bring something up that we talked about in SIG Release yesterday: basically, everyone noticed that the kubeadm jobs weren't in master-blocking, which nobody seemed to notice before, so we made the kubeadm jobs master-blocking for all releases going forward. But yeah, just in case you had some comment or something to say there.
C: I think that we absolutely should target master-blocking, because the lifecycle of kubeadm is so tightly coupled with the core right now. I think long term I'd like to see more of a Helm-like approach, but right now there's absolutely no way that we can do that.
C: There's way too much coupling, so I think it makes sense. I also appreciate that we're going to expedite the manual testing to get some signal right away on this, because I think that signal is important; and getting the upgrade tests running and monitored throughout the lifecycle of 1.9 is also super helpful, because getting upgrade testing and all that working at the last minute is a thing that does not scale. It's been a huge impediment.
H: I don't think this is for this particular SIG, but in general: how do we do cross-project testing and all of that, and how are we going to assemble the repos? The same thing came up with the CNI issue with the release, right? We don't really have good control over versions of things that aren't in the core repo, and we're going to have to figure that out somehow. I don't think this is necessarily this SIG's problem, though.
D: I'm not sure who does that, honestly. I mean, we've got to figure that stuff out, like SIG Architecture, or... I just don't even know who. But I think we need a good proposal, a good plan, around the multi-repo stuff that really takes testing into account, and sort of: how do we draw versions for releases together? You know, we're going to need something like the tool that the Android folks use to actually construct a tree out of a ton of repos.
B: And, like, on federation: that's why I tried to research federated testing as well. We should definitely get a better story there and make it a lot easier for others to run, like, kubeadm tests on Azure or AWS or whatever. And yeah, basically, Jacob wasn't here today. Is there anyone, by the way, that wants to work on testing here, like for kubeadm? That would be highly appreciated. Yeah.