From YouTube: 20180417 sig cluster lifecycle
A
Hello, today is April 17th. This is the standing SIG Cluster Lifecycle sync meeting. If any folks have anything they want to add to the agenda, please do it now; otherwise we'll kind of start to burn through the agenda items, at least in order for right now. I had talked with Andrew Sy Kim about an issue in the backlog with regards to the self-hosted control plane and dealing with external cloud providers. There's an issue there, and he was gonna kind of give a TL;DR summary of some of the issues that they're running into. Yeah.
B
Yeah — this is Andrew; I'm part of the cloud provider working group. So pretty much the problem is that with self-hosted Kubernetes we rely on some of the fields that the kubelet sets on the node to kind of figure out, you know, what the node status IP is and what it should be. And that's not a problem for in-tree cloud providers, because we kind of build the kubelet binary with all that information, so we can actually fetch that information. For external cloud providers —
B
The problem is that when a node registers, it gives itself a taint, and then it waits for some external controller to remove the taint and fill in all the necessary fields that are specific to a cloud. So the problem there is that you need the API server running to set those fields, but then we rely on the kind of self-hosting mechanism to run the API server. So there's that chicken-and-egg problem that kubeadm has not really solved yet. So pretty much, it seems like there's two options.
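(For readers following along: the registration flow Andrew describes is the taint-based handshake used with `--cloud-provider=external` — the kubelet taints its own Node object, and the external cloud controller later fills in the cloud-specific fields and removes the taint. Below is a minimal client-go sketch of the controller side, written against the client-go signatures of this era. The `node.cloudprovider.kubernetes.io/uninitialized` taint key is the standard one; everything else — function names, paths, sample addresses — is illustrative and not kubeadm or cloud-controller-manager code.)

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Taint the kubelet applies to itself when started with --cloud-provider=external.
const uninitializedTaint = "node.cloudprovider.kubernetes.io/uninitialized"

// initializeNode mimics what an external cloud controller does: set the
// cloud-specific fields on the Node, then drop the uninitialized taint so
// workloads can schedule.
func initializeNode(cs kubernetes.Interface, name string, addrs []corev1.NodeAddress) error {
	node, err := cs.CoreV1().Nodes().Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Remove the uninitialized taint, keeping everything else.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != uninitializedTaint {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	updated, err := cs.CoreV1().Nodes().Update(node)
	if err != nil {
		return err
	}
	// Fill in what the kubelet could not know without cloud credentials.
	updated.Status.Addresses = addrs
	_, err = cs.CoreV1().Nodes().UpdateStatus(updated)
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	addrs := []corev1.NodeAddress{{Type: corev1.NodeInternalIP, Address: "10.0.0.5"}}
	if err := initializeNode(cs, "node-1", addrs); err != nil {
		panic(err)
	}
}
```

The chicken-and-egg problem is visible here: every call in this sketch needs a reachable API server, which in the self-hosted model is itself waiting on the node to be initialized.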
B
A
Yes, that's always been the case. Like, I think working externally, from the outside in, there's a lot of context that's lost, and I don't think — I think from a documentation perspective we haven't necessarily done the best job with regards to articulating the constraints of self-hosting. It's still alpha, right; it's never been graduated because there's a bunch of issues associated with it, and I'm on the hook to write a proposal for how to fix some of those issues in the long haul, at least for this cycle. So I think —
A
B
Yeah, sounds good. And yeah, the second option I was going to kind of say — and it seems kind of hacky — is that we can run the API server with a static pod, then wait for the external controller to do whatever it needs to do, and then shift to the self-hosted version. But again, it sounds like it's operationally complex, and the code might get a little messy trying to do the logic. So, so —
A
To give you a TL;DR of kind of the idea behind the change that I want to make, or the proposal: it's to basically have a sentinel pod, which does exactly what you mentioned. The sentinel pod would basically be a single static manifest that gets started up, whose sole goal in life is to detect whether or not the self-hosted control plane is up; and if it's not up, bring up a static pod, then pivot, and then shut itself down. So its only purpose in life is to do that mechanism.
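(A rough sketch of what that sentinel loop could look like, assuming it probes the local apiserver's `/healthz` and that the kubelet watches the default `/etc/kubernetes/manifests` directory. Nothing here exists in kubeadm; all paths, ports, and timings are made up for illustration.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"time"
)

const (
	healthzURL  = "https://127.0.0.1:6443/healthz"                      // self-hosted apiserver
	recoverySrc = "/etc/kubernetes/recovery/kube-apiserver.yaml"        // pre-rendered static pod
	manifestDst = "/etc/kubernetes/manifests/kube-apiserver-pivot.yaml" // kubelet manifest dir
)

// controlPlaneUp probes the apiserver health endpoint.
func controlPlaneUp() bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: a real sentinel would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// 1. Detect whether the self-hosted control plane is up.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if controlPlaneUp() {
			fmt.Println("control plane healthy; nothing to do")
			return
		}
		time.Sleep(5 * time.Second)
	}
	// 2. It isn't: drop a static pod manifest so the kubelet starts a
	//    recovery apiserver.
	manifest, err := ioutil.ReadFile(recoverySrc)
	if err != nil {
		panic(err)
	}
	if err := ioutil.WriteFile(manifestDst, manifest, 0600); err != nil {
		panic(err)
	}
	// 3. Wait for the pivot back to the self-hosted control plane, remove the
	//    recovery pod, and shut down. (A real implementation would check the
	//    self-hosted pods specifically, not just the port.)
	for !controlPlaneUp() {
		time.Sleep(5 * time.Second)
	}
	os.Remove(manifestDst)
	fmt.Println("pivot complete; sentinel exiting")
}
```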
B
A
With regards to that other PR, where I kind of, like, stopped it: using the pod IP for something that is self-hosted, where you have a load balancer in front of it — I don't even know how that could work. So with the PR that was proposed, there was an issue there: you need the host IPs, because the load balancer will be pointing to them in the multiple-master situation. Without having that host IP value percolated throughout your system, I don't see how you could even run that way. Yeah.
B
So that was also — whoever wrote the PR, I think the intention was that if you run a pod on the host network and then you try to get its pod IP, you would essentially get the host IP anyway, and we thought that we'd go through a different code path if we tried to use the pod IP instead of the host IP. But it turns out, if you try to get the pod IP with the downward API on host network, it's the exact same code path, so the problem was still there. Yeah. Okay, all —
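(To make the pattern under discussion concrete: a host-network pod reading its own IP through the downward API, expressed here with the Go API types. The image and names are placeholders. Because `hostNetwork: true` makes `status.podIP` resolve to the node's IP, this goes down the same code path the PR was trying to avoid.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // control plane pods run on the host network
			Containers: []corev1.Container{{
				Name:  "kube-apiserver",
				Image: "k8s.gcr.io/kube-apiserver:v1.10.0", // placeholder
				Env: []corev1.EnvVar{{
					// Downward API: on a hostNetwork pod this yields the
					// host IP, not a distinct pod IP.
					Name: "POD_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
```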
A
B
A
C
I can start if you want. Yeah — so we're still plugging away trying to find a reasonable band-aid fix for 1.10, for fixing the upgrades. You know, I have a PR out there already to fix the race condition that we have around checking the pod status, but we've kind of iterated on different fixes for that — at least fixes that we can backport to 1.10; we can do much better for 1.11.
C
We keep running into issues with the way that we're generating the manifests right now, and with some of the plumbing needed to kind of rectify the situation and be able to handle the non-TLS to TLS etcd upgrade. Last night I kind of had a little bit of a revelation: that, you know, we could just use the etcd status to verify that the etcd static pod is redeployed, rather than worrying about checking that the static pod itself has been redeployed. So I have a PR out there.
C
I'll get a link in the meeting notes. It basically just disables the static pod hash check for the etcd static pod, and then just polls — well, it has a pause for 30 seconds to wait for the API server, or the kubelet, to delete the old static pod — and then it just polls the etcd URL to verify that etcd comes back up. And if it doesn't come back up in a reasonable amount of time, then it initiates a rollback like we have today. It seems to be reasonable.
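(A condensed sketch of that poll-then-rollback flow. The 30-second pause, the etcd client URL, and rollback-on-timeout mirror what's described above, but the function and its shape are illustrative, not the actual PR.)

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForEtcd pauses so the kubelet can delete the old static pod, then
// polls the etcd health URL; if etcd never answers, it rolls back.
func waitForEtcd(healthURL string, timeout time.Duration, rollback func() error) error {
	time.Sleep(30 * time.Second) // let the kubelet remove the old static pod

	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // etcd came back up: upgrade verified
			}
		}
		time.Sleep(2 * time.Second)
	}
	// Not back in a reasonable amount of time: restore the old manifests.
	return rollback()
}

func main() {
	err := waitForEtcd("http://127.0.0.1:2379/health", 2*time.Minute, func() error {
		fmt.Println("rolling back etcd static pod manifest")
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```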
C
D
Yeah, my personal opinion on this is, like: what level of functionality do you expect to be able to achieve out of kubeadm, or a resulting rollback? So the interesting case that I ran into, and ended up putting together a solution for in the very large PR that I put up yesterday morning, is that, unfortunately — so, in the current upgrade DAG, if you successfully upgrade etcd and then —
E
D
— the API server fails to upgrade, kubeadm thinks: hey, you know, that's pretty OK, the data store is fine. But it doesn't take into consideration that the API server manifest may not be configured with a compatible protocol setup for talking to etcd, and so you need to handle the case where the API server is not going to come up when you roll it back —
D
— if you have a successful etcd TLS upgrade. So that is one of the cases that is handled in the PR that I put up yesterday, because I noticed that there were two reports on the issue about how, after getting past the etcd upgrade, etcd is not rolled back when the API server potentially fails. So that's something that my PR handles.
D
One thing that's encouraging about the patch that I put up yesterday is that a lot of it is util, and so in the actual code that's getting hit in the execution path, the logic changes are relatively simple; it's just that final command. So I think the solution itself is pretty simple — it just needs help.
A
So — isn't that an edge case? Do we have — I haven't looked at the details of the original issue filed — is upgrading the API server and then rolling back the API server a common condition that people are running into? Because that seems more of an edge case to me. But I don't have data, right; for me it just works most of the time, but I could also MacGyver my way out, right. So —
C
E
D
A
So, if we're gonna do this, we need to go all in: combine and reduce the patches into a single patch change set, and make sure we test the hell out of it. And so we're gonna need folks who have time for testing to verify this patch set. I know I've gotten a couple of emails recently from folks who want to contribute but don't know where to contribute, or how to help and get engaged — this is a concrete way for them to, you know —
A
The testing scenario is really simple: you install 1.9, and then you upgrade to 1.10, and just in that scenario everything should work properly and there should be no issues. We might even come up with a couple of failure conditions for testing, but we need to test the heck out of it, and that's a very simple way for folks to contribute who want to. In the meantime — Jason and Lee, what do you guys want to do? You have so many patches now that I've lost track.
A
We need to rally on one and try to make that one include all of the mechanisms that we've just determined are the cleanest and simplest ways of dealing with it — like using the kubelet manifest versus... actually, I like the query-etcd-status portion for the etcd upgrade; I think that's far simpler.
A
So if we can trim that piece out, that would be clean. I'm not sold on the kubelet API for the API server, because if the API server fails, it will either respond to queries or not after a timeout interval. So I don't think we need the kubelet interface if we have the etcd check in place. Is that fair?
A
C
C
G
D
The only bit that I really liked is that I had to skip the layer 7 checks for etcd, because I didn't have a waiter — an HTTP waiter — for the etcd status check. So if we take your code from your new patch, which does the etcd status check in a waiter, then we can put that bit back in on top of my patch, and then that should be pretty much ready to go. Okay.
A
Thanks — I think let's rally on that single PR, then, that you have, and let's combine and distill those bits. I'll let you two work out the details and try to close down the other PRs, because I think there's four or five separate PRs that are working on pieces of this. Close the other ones and just point them all to rally on the single PR, and then we'll try to hash out the details on that one and try to get it merged.
A
D
C
I can just open up a PR to it — that'll work, cool.
D
D
You know what I'm talking about, with how I skip the layer 7 check on the TLS upgrade? Yep, yeah. So if we just pull that out and then put it in a waiter or something, I think it'll be pretty solid. Okay.
A
So I'll let you guys work out the logistics, and we'll close down the other issues; ping me when it's ready for review again. And I think I want to probably loop in other folks too, especially the folks in China who are very prolific on the reviews. Then we can try to get that in, hopefully before the end of the week, and I'll talk with the release team too as well. That kind of leads in to the next conversation piece. Is there anything else there? Yeah, I kind of want to ask —
A
D
C
C
D
A
For putting the patches together — I think it's much appreciated. I think, going forwards, as I mentioned in the last call, I need to work with Benjamin Elder — I was going to do that later on today — to fix the upgrade testing, because it was broken in the 1.10 cycle, and it was broken because of the mechanisms and the apparatus that were legacy, that no one wants to maintain. So I'm gonna fall on the sword there and try to get those back in place. I would —
D
A
— say the tests in kubeadm proper are not very complete. The broader-scope testing infrastructure of end-to-end tests, which stand up whole clusters and do everything else, is one piece of the puzzle, but having a localized set of integration tests that allow us to do very focused, failure-driven scenarios — that has always been a thorn in our side. And again, if folks are interested in contributing, that is an area for them to contribute in that would be highly beneficial.
A
G
Exactly — the MasterConfiguration format, to be specific. So now things that are serialized out from kubeadm 1.9 cannot be deserialized by kubeadm 1.10 without causing a bunch of errors. Our solution was to read these in as unstructured, manipulate them into the new format that 1.10 expects, and then deserialize them into the Kubernetes objects that we expect.
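(A toy version of that read-as-unstructured migration, assuming a YAML MasterConfiguration on disk. The renamed field is hypothetical, standing in for the real 1.9 → 1.10 differences.)

```go
package main

import (
	"fmt"
	"io/ioutil"

	"sigs.k8s.io/yaml"
)

// migrate reads the old config as an untyped map, applies the field-level
// migrations, and re-serializes it in the shape the new version expects.
func migrate(old []byte) ([]byte, error) {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal(old, &cfg); err != nil {
		return nil, err
	}
	// Example migration: move a renamed scalar field (hypothetical names).
	if v, ok := cfg["oldFieldName"]; ok {
		cfg["newFieldName"] = v
		delete(cfg, "oldFieldName")
	}
	cfg["apiVersion"] = "kubeadm.k8s.io/v1alpha1" // target version
	return yaml.Marshal(cfg)
}

func main() {
	old, err := ioutil.ReadFile("/etc/kubernetes/kubeadm-config.yaml")
	if err != nil {
		panic(err)
	}
	migrated, err := migrate(old)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(migrated))
}
```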
G
Unfortunately, it was not quite so straightforward. For example, our initial thought was to gate doing this on the previous version and the current version — so only run the migrations if kubeadm is 1.10 and the past version is 1.9. But if you're doing that, you'd be running the migrations to deserialize into kubeadm, kubeadm would then serialize out that 1.10-compatible format, and if you're then trying to upgrade 1.9, it's going to fail, because the changes are incompatible.
G
So Fabrizio and I talked a bunch about the proper ways to solve this, and by far the least-worst option — I'm not gonna say good, but least worst — is to end our support for upgrading older minor and major versions of Kubernetes using a newer kubeadm binary, because of these issues with configuration file formats. And I suspect that we could easily introduce errors like this in the future in other ways. We need to —
G
— we need to have a much better testing apparatus before we're going to say: yes, we guarantee that you can do this. The way that I have been testing this is simply by building the 1.10 binaries myself and just scp-ing them to the servers that I'm upgrading. Obviously that's not a solution for, like, bulk fleet deployments, but if you're doing a strict upgrade, all you have to do is the plan step, then upgrade in whatever way you have allocated to do that, and then the 1.10 upgrade step.
G
The plan is to not backport the changes that we've made for 1.10 to 1.9, so those will continue to function as normal; and then, once people decide to use kubeadm to upgrade to 1.10, our changes will be backported to the 1.10 branches, so those upgrades will work from 1.10.2 onward as we expect. That was a lot of words that I just said — does that make any amount of sense to anybody?
D
G
And one of the things that's going to make this sort of thing much easier to support is, eventually, we would like to get our configuration structs into beta, which means they are much less likely to change in backwards-incompatible ways. And we're also — I'm working on e2e tests right now; Jason is working on getting the upgrade tests working again, and I —
A
Generally, I've always felt slightly uncomfortable with having an upgrade pattern where a newer installer version operates on an older version of the software. That just generally is an uncomfortable scenario that I've never supported in the past. I don't know if other people are using it, or what other people's thoughts are there, so I'm gonna open it up for comment.
G
D
I'm interested in this topic, because I actually ran into — I can foresee a bug with the etcd TLS upgrade that actually also applies everywhere else. It has to do with the fact that, during the upgrade, we never, like, have a version of the old configuration and the new configuration. And it would be very beneficial, kind of, before that API goes into beta, for our actual MasterConfiguration object to have important versioning aspects placed into the struct — like when the current cluster was deployed.
D
D
G
Kind and apiVersion info? Right now those fields are just unset. I don't know why — probably just that nobody ever got around to doing it. What they do have is the version of Kubernetes; I don't know exactly how this is set — I think it's the version of Kubernetes that was running when the ConfigMap was serialized.
G
It's in that struct — it's called, like, "current k8s" or something like that; I don't remember off the top of my head. So some of that version information is available, but I think having the Kind and apiVersion actually be filled out is the sort of Kubernetes way to do this sort of thing, and then we can have converters.
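(For illustration: "having Kind and apiVersion filled out" means embedding TypeMeta in the config struct, so a serialized file self-identifies and a converter can dispatch on it. The minimal struct below is made up, not the real MasterConfiguration.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// MasterConfiguration sketch with TypeMeta embedded, so serialized configs
// carry the version they were written with and converters can dispatch on it.
type MasterConfiguration struct {
	metav1.TypeMeta   `json:",inline"`
	KubernetesVersion string `json:"kubernetesVersion,omitempty"`
}

func main() {
	cfg := MasterConfiguration{
		TypeMeta: metav1.TypeMeta{
			Kind:       "MasterConfiguration",
			APIVersion: "kubeadm.k8s.io/v1alpha1",
		},
		KubernetesVersion: "v1.10.0",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // apiVersion and kind now round-trip with the file
}
```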
F
C
G
C
G
H
I can tell you what we do in kops for our version skew strategy, which is: we scoped it so kops 1.9, for example, supports installation of Kubernetes 1.9, 1.8, 1.7, 1.6 — I think still 1.5, technically. In other words, we go all the way back, and the reason we do that is simply pragmatic: we don't want to have to backport a fix into kops 1.8 and kops 1.7. So we instead say: always use the latest version, because otherwise we'd have to backport.
G
H
I mean, yeah — if you put up a test at all; I mean, you put them into test-infra and they will run, and then you're gating on that. You may not want to be on each PR, but yes, it is certainly very good to be on the dashboard, to run against master. And I think you can even get a PR builder that doesn't actually appear — a sort of stealth PR builder — which is probably the optimum scenario there.
A
A
The upgrade job is totally busted, and that was the problem. It was busted for all of the 1.10 cycle, and we knew it was busted, we knew how it was busted, and it was all a matter of, like, everyone looking in a circle saying: we see this issue, we know what it is, we know why it's there — but, like, whose piece is this? And the answer was nobody. So now I think I'm gonna take it on, whether I like it or not, to try and fix the issue.
A
So I'm going to talk with the test-infra folks to try and address this. It might be worthwhile to maybe have a conversation as part of the kubeadm office hours tomorrow, to talk about the infrastructure that's there and how we want to kind of modify it — because it is a thorny, thorny problem.
A
I
A
Totally agree. I think there needs to be a KEP that we put up for two things: for the configuration file, as well as the upgrade semantics and the support structure that we wish to, you know, have going forwards. I think that gives a contract to folks so that they can rely on it, and makes sure that we keep ourselves honest.
I
Yes, I agree. My comment is that we are now fixing the problem, but if we want to add this, definitely we have to polish the user experience a little bit. For instance, the upgrade planner should stop suggesting to the user the kind of upgrade that is not supported. So, for the time being, where we are fixing — I —
D
Just on this topic: the bug that I'm talking about with the configuration files is that etcd's data dir — or, sorry, the data dir and certificates dir — are user-configurable, and if you change them between an upgrade, it will break the upgrade, because (a) we have no code to handle the directories changing, and (b) we don't even have access to the previous configuration. So —
A
Those I would consider to be separate concerns, but I do believe there needs to be a reconfig option. We have upgrade, but we don't have, like, a reconfig — basically, you change the config and it resets all the parameters; it's almost like an init, too. But I'm not as concerned about that, so long as we spell out the contract. If somebody changes it in the middle and we have a reconfigure option, I'm totally OK with that; but if they change it and then do an upgrade, that's kind of on them — they're in their own space.
D
What I'm really getting at is that that kind of ability to compare structures could have been a good escape hatch for us for this particular fix. And so that's another code smell, where I'm like: oh, we have no ability to compare the previous configuration structure with the requested one — and looking at the code there, it's very single-purpose, I think.
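(A sketch of what that escape hatch could look like: diff the previous configuration against the requested one and refuse the upgrade when effectively-immutable fields — like the etcd data and certificates directories just mentioned — differ. The struct and field names are hypothetical.)

```go
package main

import (
	"fmt"
	"os"
)

// upgradePaths holds the user-configurable directories discussed above
// (hypothetical struct; the real config carries much more).
type upgradePaths struct {
	EtcdDataDir string
	CertsDir    string
}

// checkImmutable compares the previous configuration with the requested one
// and rejects changes we have no code to handle mid-upgrade.
func checkImmutable(prev, next upgradePaths) error {
	if prev.EtcdDataDir != next.EtcdDataDir {
		return fmt.Errorf("etcd data dir changed (%q -> %q): not supported during upgrade",
			prev.EtcdDataDir, next.EtcdDataDir)
	}
	if prev.CertsDir != next.CertsDir {
		return fmt.Errorf("certificates dir changed (%q -> %q): not supported during upgrade",
			prev.CertsDir, next.CertsDir)
	}
	return nil
}

func main() {
	prev := upgradePaths{EtcdDataDir: "/var/lib/etcd", CertsDir: "/etc/kubernetes/pki"}
	next := upgradePaths{EtcdDataDir: "/data/etcd", CertsDir: "/etc/kubernetes/pki"}
	if err := checkImmutable(prev, next); err != nil {
		fmt.Fprintln(os.Stderr, "refusing upgrade:", err)
		os.Exit(1)
	}
}
```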
A
The lesson from the 1.10 release going into 1.11 — or this series of lessons — is: we need to have well-thought-out designs that we agree upon, and make sure we document the hell out of them. That way, as new people come on board and come in line into the SIG, you know, we can hand off, and the contracts are clean and well-defined.
A
Super hard, but yes — we shouldn't be promoting things that are alpha-grade into kubeadm in general. If there are alpha API objects, we should try to avoid them. I think the only reason why some of those things were done was to allow for easier deployments at that time; but, you know, again, I think going forwards — even, like, dynamic kubelet configuration: we had a number of PRs to enable that feature, but we're avoiding that until it reaches beta.
A
I've already talked about updating tests. I wanted to make a PSA — I sent an email to sig-cluster-lifecycle with regards to how we're kind of, like, approaching triage, backlog, execution ordering, yada yada yada, because not everybody can attend this meeting. I know that there are folks in China who actively contribute to this thing, and I want to make sure that the process that I talked about last time is pretty clean and simple.
A
We have a 1.11 backlog, and we basically assign people who I know are going to be, like, active with a certain percentage of their time on the SIG; but that does not mean that they are the sole person that can execute on these things. So if the person who's been the default assignee, you know, marks it as active, that means that they're about to make a patch, right. So let's let them finish their patch, and then we can contribute that way.
A
But if they're not marked as active, or if they're not working on it, feel free to take the issue on and talk with the assignee, and also communicate on the sig-cluster-lifecycle Slack channel. I do know that it can be kind of intimidating — trying to, like, you know, vector into a tornado — but there's a lot of folks here in the SIG who can help you and on-ramp you into that, right.
A
A
F
I just wanted to talk about HA clusters for a second. We have that one KEP out for kubeadm join master, and that KEP intentionally punts on etcd — it says use an external etcd, essentially. It would be great to have a phase — an etcd phase, or a command in an etcd phase — to help us spin up an external HA etcd cluster. Not something totally automated, but something to help at least with the manual process of having members join clusters.
E
D
D
When talking with Lucas about it, he said that he sees that it would be possible to put together a POC of documentation about how to spin up a multi-node etcd cluster with current kubeadm — just using kubeadm alpha phase etcd local with the configuration object to spin up the cluster. We believe that it is already possible. So — cool, yeah, that relates to what you're talking about, awesome.
F
A
I think what we also want to do is trim down some of the goo that we've added over time that was originally intended to be part of the operator. Given that CoreOS — now Red Hat — has no intent on supporting that, we should probably deprecate any of that stuff, because it was never fully enabled.
A
It was always the plan to get that online, but it was never all there; and with the central storage of much of the information in the config, I think it's totally possible today to do a lot of that configuration apparatus. I think the one challenge will be the cert copying across the members, but that is totally accomplishable through temporary secrets or through CRDs, which are already part of the operator stuff. So you can trim out a piece of it, right — which is also stuff we've commented on. And —
H
A reminder: I also have the etcd-manager project, which I am continuing with. So, you know, I think — not necessarily that particular project, but just the idea that we probably want external tools that we interface with, rather than, you know, building it into kubeadm. And I also would welcome anyone who wants to work on etcd-manager.
H
But, you know, there are lots of other tools out there that do similar sorts of things — that bootstrap HA etcd clusters, typically on, you know, AWS or a particular cloud — and integration with those tools could be a good way to do that. I don't think it necessarily has to be built into kubeadm; it could be a shell-out to a command that is printed, or something.
A
Long term, what I'd like to see kubeadm do — and I'm going to talk about this tomorrow, too — in many respects, it's nearing the edges of bubbling up into the spaces where we didn't want to go, and ideally we'd sort of punt out, or rip out, the code into factored tools. I think that's a totally reasonable approach in the long haul, right.
A
So that way, you know, kubeadm then sort of says: this is its scope, and then we use this other tool, and it's composable for this vision, for this feature set. I do think that making it tightly bound and dependent and clean and simple will be required, but I don't think we're there yet. I think right now, for the person who wants to come to this with a simple environment, having a very clean, simple user —
A
— experience is what we want to optimize for, and then factoring those pieces out is definitely something I would like to do. Like, for example, the self-hosting feature itself: I would like to completely factor that out of kubeadm and put it into what I've been calling the sentinel, right. That pivoting apparatus doesn't need to belong in kubeadm proper; it could belong as part of the sentinel, right.
A
Same with etcd management, right — we could completely punt to a separate tool, right. But I think, for the time being, talking about the UX workflow and the ideal scenario is a beneficial thing, right — like, anybody who wants to spin up an etcd cluster, we want to be able to make it as simple and as clean as humanly possible.
A
There's a bunch of separate meetings, but we don't have a separate venue or location for a face-to-face — I don't think that would be a bad idea, though. I know there's the contributor summit, which is on Tuesday. Justin, did you set up some separate time? I know you — are we gonna set up a separate location, or you're thinking about it, at least?
H
Oh — sorry, for KubeCon, you mean? Yes, there is — I have to catch up on that. There is an AWS thing and there is a kops thing. So — that's at least what it is right now; I don't know whether there's a separate kubeadm track. But is there?
A
A
E
D
Specifically for the SIG, you know — where we can help people, like, onboard onto an issue or that kind of thing, and get together to talk about roadmap. I remember Lucas and Kris Nova being kind of the main people behind this. I am gonna see Kris in about 30 minutes, so I can sync up with her on it and see if she was going to continue doing something in that manner. I do.