From YouTube: 2017-09-05 17.05.07 SIG-cluster-lifecycle 166836624
A
Oh, hello everyone. This is SIG Cluster Lifecycle on the 5th of September 2017, and we have just passed the code freeze, so it will be interesting to find out where we got to with everything. But before we do that, let's start with going through the agenda. So Jessica, who is trying to get online... are you here now? Maybe put that to the bottom of the agenda, so that if she does manage to get online... oh, did that, yeah, copy the whole thing.
E
It got pushed off; it did not make it default for 1.8, just because there were several other things in flight, and we had that long conversation about it. The PRs and docs are in place for the checkpointing, but I don't think there would have been review bandwidth to get that done and in place. Besides, there's contentiousness on naming of things too. So... I didn't realize; I thought the DaemonSet feature was completely orthogonal to this.
A
The DaemonSet feature, I think, is orthogonal to this. So this is the pull request: 50984, I think, was the checkpointing proposal, or the checkpointing implementation. I think that was when we realized that the checkpointing implementation was the thing that wasn't going to land.
B
What I think we should do here is deploy the out-of-tree, bootkube-style checkpointer now, in 1.8, and call it alpha, which it indeed is. But then we have pretty much a fully working self-hosting environment; I mean, I implemented the self-hosted upgrades as well. The DaemonSet thing is just making those two hundred lines of code a little bit easier to implement, but I've done the thing the hacky way, as we describe it, by basically re-implementing what the DaemonSet...
B
...upgrade strategy would do? Not exactly, but similarly. So, one: I think we should just do the checkpointing out-of-tree and say, hey, self-hosting is something you can use; it's alpha-like quality; do note that we don't have the checkpointing in the kubelet, but, by the way, otherwise it should work at a beta level, right?
E
Why would we want to do that? I mean, it's all about upgrades, right? That's pretty much what most people will care about. What's the primary motivation for self-hosting? My understanding was upgrades, right. So if we have upgrade support in the native version with static pods, do we need to... do we want to put this option out in the wild, potentially have people use it, and then have to migrate them away from it?
E
I'm not... I'm going to try to restate what you mean. So you basically stated that you want to have alpha-grade support for self-hosting, using the external checkpointer, along with your modification for the self-hosted upgrades, which has the DaemonSet strategy, right, built into kubeadm upgrade? Yeah.
B
The primary motivation for holding off on self-hosting by default, back when we still thought we should have full-featured support for it, was that we don't want to get too crazy and enable something we haven't tested. But I definitely think we need the three months of testing self-hosting out in the wild, and it does work in all the cases it needs to. The implementation is going to be improved over time, but I...
A
The thing I think we need to be careful about is messaging the fact that we haven't given up on self-hosting, because if you just look at the deliverable, then you might deduce that SIG Cluster Lifecycle had. So we need to be very clear about that in how we publicize this. But yeah, the main thing that we do have is the new upgrade UX. I don't know... I think that, to put it differently, actually, maybe Lucas...
B
It would be like... we have Jamie, who kindly added feature flag support to kubeadm. So I think it would be great if we had self-hosting, and self-hosting with the certificates in Secrets, already behind a feature flag. And I think it would be great if, when the self-hosting alpha feature gate is enabled, kubeadm will go ahead and create the self-hosted cluster.
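For reference, a sketch of what that flag-gated flow would look like on the kubeadm command line. The gate names here are my assumption, not something stated in the meeting, which only says "behind a feature flag":

```shell
# Assumed gate name for the self-hosted control plane:
kubeadm init --feature-gates=SelfHosting=true

# With the control-plane certificates stored in Secrets as well (assumed name):
kubeadm init --feature-gates=SelfHosting=true,StoreCertsInSecrets=true
```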
B
We don't have to provide a lot of guarantees, but I think it's really crucial that at least we do that: (a) we deploy the checkpointer, so that we have this feature like we already do, but (b) we provide a little more stability and more guarantees, and make sure your cluster doesn't burn to the ground. But is...
E
Is it possible for us to instead have a compromise here? Because we already have the feature freeze in place, but that doesn't necessarily prevent us... most of the code is there, right. I don't know if we could actually backport the strategy and checkpointing, though. I guess part of me was wondering whether or not we could backport some of these features onto 1.8, so that we could just release as-is and they'd have like a one...
B
Not that much, I mean. If we can rely on checkpointing being in the kubelet in 1.9, which I hope we can, then it's just a matter of... let's see. There are things like: after you've upgraded all of the cluster control plane, once it's upgraded, kubeadm upgrade performs some post-upgrade, kind of post-control-plane-upgrade tasks. For example, I have a PR up for upgrading the alpha bootstrap tokens; that would be one of the ones, like one minor change.
B
We have one thing to do there, just fixing a small bug, but, I mean, that would just delete the old alpha static pod checkpoint; it's kind of trivial. Or have it... I mean, we could also have it there for 1.9, and then 1.10 would remove it, so we'd have like double... I know, but anyway, I mean, I don't think that's at all a problem, and...
E
They cut the 1.9 branch, like, midstream of the 1.8 release; typically they'll cut the alpha very... I don't know why they do it. I have actually been opposed to this, because you get this weird crisscross of features that occurs, and I see Lucas nodding, because he's had to deal with that in the code, because I've seen it in the code, but there's this crisscross of features. So they'll cut the alpha version pretty early. So what we...
A
If I... yeah, so if I can finish the thought: if we can give people a way, like instructions, of trying out the alpha version of all the components that work together in the way that we actually wanted to design it, and give that to people sort of out of band with respect to the release cycle, but during the 1.9 release cycle, then we can achieve the goal which I have, which is to get people testing this stuff.
F
I mean, I guess, to summarize, the pushback to Lucas's proposal is that we are both having people test in the wild, but not in the way that we want them to run it eventually, which is what Lucas is saying, and also that we're creating technical debt, which is what Tim is saying, in the sense that, because they're running it differently, we now need to provide an upgrade scenario to get them back to the mainstream. And that adds extra code, extra complexity in the meantime, extra test burden in the meantime, and extra test burden during that upgrade scenario.
A
I was muted, sorry. Yeah, so I was just gonna say: I feel like some sort of package of an early 1.9 alpha kubelet, plus a version of... we also need the DaemonSet upgrade, the DaemonSet rollout strategy. So once we have those two pieces in master, we can cut a binary, or a set of binaries, or container images, that we encourage people to test with. We know...
F
So the first 1.9 alpha release, like 1.9.0-alpha.1, should be cut, as Tim was saying, probably two weeks after we cut 1.8 or something like that; it's very early. And so, if we sort of have the PRs lined up for the DaemonSet upgrade strategy and kubeadm self-hosting, we can try to get both of those into that first 1.9 alpha release, and then build kubeadm based on that and say...
B
I think it's... well, it might work, but I think it's worse than giving something to 1.8.0 users and having a really easy way. I'm not at all worried about the upgrade case there, given the out-of-tree component. And also, I mean, the most important thing is the user; the user doesn't care about how checkpointing is implemented, and, I mean, I think the most important thing is all the users...
B
We could say to users, as we have advertised in blog posts, that we have a new feature called self-hosting: you can try this out by setting the feature flag, and you don't have to do anything more; it's convenient. And those users who are interested in what a self-hosted cluster looks like can also, like, tinker with it, and, I mean, when they're done, kubeadm reset, and all this works.
B
Compared to, like, providing users with a lot of custom builds from master, which is basically what alpha.1 is, which can be... I mean, understand: we don't know how unstable alpha.1 is going to be, but it's deemed unstable, as it's just a scheduled cut from master. So I think it's more about managing expectations and saying, like, either...
A
All right, so if I can summarize what I think you just described as the goal: your goal is to give users something that they can actually use off the official 1.8 release, so we don't let people down in terms of expectations that we've set. That's one aspect. And then also so that we can give people something they can play with, not necessarily use for real, but play with, something that isn't as scary as a 1.9.0-alpha.1 cut from master. Okay. So...
E
The only thing that we would need to deal with is the upgrade strategy to get them from A to B, and if Lucas is confident with that, I'm pretty okay with it. It is some level of debt, but it is, again, orthogonal to the main premise. But the question then becomes, because we have everything as alpha in our stated things: how many people use alpha-grade things? How many people would be okay with using alpha-grade things?
C
If you do end up using the pod checkpointer, like the external one, it ultimately is just a DaemonSet, and it will clean up everything as soon as you delete it. So it's not that big of a deal to switch from one to the other: just delete the DaemonSet and then add whatever flag toggles the native one. So that I wouldn't be particularly worried about. It is different, though; I agree that we want to be testing native kubelet checkpointing at some point, because that's, you know, a meaningful thing.
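A minimal sketch of the switch just described; the DaemonSet name and namespace are placeholders, since the meeting doesn't name them:

```shell
# Remove the out-of-tree checkpointer (name and namespace are assumptions)...
kubectl -n kube-system delete daemonset pod-checkpointer
# ...then turn on the kubelet-native mechanism via whatever flag gates it.
```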
H
We should also be careful about turning on the marketing machine if it's only available in something which we don't really want people to use in production. Because, like we see in the other projects, users then start asking: why don't you have pod checkpointing in GKE? I don't see self-hosting in GKE, right? I'm not going to use GKE, because it doesn't have checkpointing, or self-hosting, or whatever it is. And it becomes tricky to explain to people that, actually, you know, this feature...
G
I think we just need to make sure that it gets communicated with the right red flashing warnings, like the whole sort of, you know: if you hit a power failure and your whole cluster control plane goes down, it won't come back up automagically, right? Like, you know, if we do that without the checkpointer. If we do the checkpointer: hey, there's a little bit of chewing gum and baling wire here, and this is not the eventual thing we want.
A
That way we're not misleading people, and I do think that having that explicitly documented bodge in place still gets people trying this stuff, still gets people trying self-hosted clusters and upgrades. If indeed it is truly transparent to the users, modulo the existence of a DaemonSet, then I could get behind it.
B
Yeah, I think so. And of course, like was said, we should still get the DaemonSet strategy in early. We should get checkpointing in as soon as possible in the 1.9 cycle, enable it for kubeadm at master, run a lot of tests, both automated and manual, and hopefully also get, like, a 1.9 alpha 3 out to users. We should definitely document, as a whole, how you test out kubeadm; I mean, that's actually something I haven't thought about.
B
It just sprang to mind now, with this discussion. I mean, we should document how to run any given alpha or beta release with kubeadm, of course, which we should have done but haven't, so, I mean, that's good practice anyway. So I think there were two proposals, and we could get away with both, yeah.
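For what that documentation might boil down to, a sketch; the version string is illustrative, and kubeadm's --kubernetes-version flag is my assumed mechanism:

```shell
# Deploy a control plane at a chosen pre-release version:
kubeadm init --kubernetes-version v1.9.0-alpha.1
```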
B
So, I mean, the bugs we found in... well, one of them could have been avoided with even better tests, like automated ones, but most of them were environment-specific. I mean, there are always going to be cases that we aren't able to test. So what... well, what Justin and I have been looking into here in the recent days is federated testing, and myself, I've been thinking about, like, federated testing for kubeadm.
B
We should... I mean, there's a way to upload test results from any given machine to the test grid, right. So, I mean, that's what we should do for all the different environments we have, and it's... myself and Justin, this is trying to figure out the spec, as the design docs there are really outdated.
B
So it's a non-obvious API there, but once we get that nailed down, I mean, it's one script away or something from actually having it really easy to run, like a bash for loop or whatever: just kubeadm init with your desired configuration on your desired machine, then it will run the tests in-cluster, kind of like what Sonobuoy does, and then just upload to the GCS bucket, and that's all then, like, fetched in the next build. So, I mean, I...
B
I have that kind of running in a hundred lines of bash or something, and it's pretty straightforward; I mean, spinning up a kubeadm cluster is like three lines of code. So that's one of the things we should get done and documented, right. Regarding the overall test goal for 1.8, I think we're in pretty good shape, with added coverage for kubeadm and the node authorizer.
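The "three lines" presumably look something like the following sketch; the token, pod-network manifest, and master address are placeholders:

```shell
kubeadm init --token "${TOKEN}"                        # on the master
kubectl apply -f "${POD_NETWORK_MANIFEST}"             # install a CNI pod network
kubeadm join --token "${TOKEN}" "${MASTER_IP}:6443"    # on each node
```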
B
As Robert said, we have Jessica working on the upgrade e2e tests for the upgrade scenario and such things. Also, we're now testing CI, like latest master, in the CI. Before that, we targeted the control plane images at the latest cut release version of Kubernetes, like alpha 3 or whatever; now we test really at master, which is an improvement, so I think we'll progress quite nicely there. But of course, we can't do too much testing.
B
There really... just a bit more: I have a bug fix for the bootstrap tokens; I updated the SIG Cluster Lifecycle release notes, the ones I could spot; and I'm currently writing up task items, quarterly task items, for kubeadm: what should we put up? What have I done, like, in these latest three months? I'm basically writing that down and providing release docs, so it's obvious to others, like, how do we do things with kubeadm, and, like, when do we...
D
That is a good question. I'm doing the coding now; it's not days, but hopefully not too many weeks. I'm uncertain; that's the best answer I...
F
So right now, Justin, we have sort of both of those sets of tests, and the verify-that-a-cluster-works-after-upgrade tests are basically skew tests, right; they're not really upgrade tests. And I would... I need to go talk to the release team about maybe renaming those for this release, because it would be great to call those what they are, which is skew tests. And then the new upgrade test framework that Chris wrote, I think one or two releases ago, holds the actual upgrade tests, where we create objects, we do an upgrade...
F
...and we verify that they are working before, during, and after the upgrade. So we have both those sets of tests now. The ones that are sort of actual upgrade tests cover many fewer parts of the cluster than the skew tests do. But when we picked the set of tests to implement for those upgrade tests, we thought they covered a pretty good cross-section of functionality, not everything.
F
That would be sort of the new set of upgrade tests I was referring to, which is the, you know: create a deployment, verify it works; start an upgrade, verify it's still working; finish the upgrade, verify it's still working. So we're actually verifying the continuity of that object across the upgrade, as opposed to: run e2e tests, delete all of your objects, then do an upgrade, then run new e2e tests, which verifies that the cluster is in a functional state afterwards, but doesn't verify that the upgrade didn't break anything.
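As a sketch, the continuity-style check just described might look like this; the image, deployment name, and target version are placeholders:

```shell
kubectl run web --image=nginx --replicas=2     # create an object
kubectl rollout status deployment/web          # verify it works
kubeadm upgrade apply v1.9.0                   # start and finish the upgrade
kubectl rollout status deployment/web          # verify it survived the upgrade
```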
B
Do we want to do both, like both skew and upgrade tests, in the coming days and weeks? Or can we, like, focus on just getting the skew tests first, which means that what we basically then cover is just: did kubeadm upgrade apply do the right thing? And then focus on, like, the actual upgrade tests: does my deployment still work after the upgrade? That's more, like, Kubernetes-specific, like: has the control plane, have the components done the right thing, right? And that's already covered by...
F
It's sort of covered; it's covered, but the signal is pretty terrible, right. Like, when we were really looking at those tests, they were failing a lot of the time; it's been one of the last things we've gotten a signal from for the last two releases. So if we could get a better signal for that, that would actually be really helpful. But I think Jessica's plan is to implement the upgrade mechanics, and then we can figure out, like, which set of tests we run, and then in what order.
B
Yeah, we already have kind of skew tests that are kubeadm-specific, where we do, like... on the 1.7 branch, for example, we set the Kubernetes version to 1.6 and use kubeadm to deploy a 1.6 cluster, and test that all the conformance tests are working. That's the kubeadm version of skew tests on the kubeadm CLI binary, right.
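In other words, a sketch of that skew job; the exact patch version is illustrative:

```shell
# A newer kubeadm deploying an older control plane:
kubeadm init --kubernetes-version v1.6.7
# ...then run the conformance suite against the resulting cluster.
```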
B
Excellent, yeah. And one thing, a known issue right now: kubeadm join on 1.8 can't be used on a node where you have a kubelet 1.7, a kubelet 1.7 manifest, the way it's currently implemented. And, I mean, we definitely have time to fix this yet. This is because in 1.7 we did the TLS bootstrapping and, like, fetched the first CSR, waited for it, and, like, went with that, wrote it...
B
The kubelet.conf, that is. A change comes in in 1.8: we should, like Mike did, remove that duplication of code, and now kubeadm is just writing /etc/kubernetes/bootstrap-kubelet.conf, which means that a 1.7 kubelet won't recognize that new file we're writing at all. So should we, on kubeadm join, exec out to the kubelet and check which version we have, and do the TLS bootstrapping ourselves if it's 1.7, or, like, skip it and delegate that to the kubelet if it's 1.8? If we have any opinions here.
F
I mean, you just mentioned the fact that we do skew testing between 1.7 and 1.6. I think we need the same skew testing between 1.8 and 1.7, where kubeadm on 1.8 can launch a 1.7 cluster, which presumably means launching a 1.7 kubelet, which means writing the kubelet config file that a 1.7 kubelet will read, yeah.
F
Mike is out at least this week, and maybe part of next week; he's on vacation. So I would say probably not right now. I will mention we have another person at Google who started looking into how to configure skew testing using kubernetes-anywhere last week; he's also on vacation this week, so we won't see much progress there. But if nobody picks that up when he's back next week, those two will keep pushing on skew testing, yeah.
F
Right, I mean, kind of usable in production: there are people that are willing to take the risk of a novel feature in production, because it works, and they know that there can be some pain around upgrades. And they're like, you know... if we think, you know, that a reboot might destroy your cluster or something like that, that's a little bit more of a risk than, like, "upgrades of your cron jobs will lose your cron jobs, but the rest of your cluster will work." Well, yeah; the stakes are just higher when it's the control plane itself, yeah.
B
One last thing: I think we can mark the implementation of kubeadm init as beta in 1.8. The reason we didn't do it before is that we said the UX is stable and not going to change; but now we can also say the implementation is stable, or at beta level, as we have the bootstrap tokens and these things in place, too.