From YouTube: SIG Cluster Lifecycle - Cluster Addons 20190917
https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit
Thanks Cornelia Davis for recording
A: Perfect, all right. Looks like we have a bunch of people here already. I will pull this up and take some notes. So it's the 17th of September. We don't have much on the agenda for today; it's mostly just review of action items from last time. If you have anything, please add it to the document.
B: I'll jump in, then. I'm Cornelia Davis; I work for Pivotal, on kind of architecture and emerging tech, helping our customers and partners tie to business value. I've been working on our container service, PKS, for the last two and a half years, and I'm keenly interested in continuing to build out a platform for building platforms. And so, cluster addons: I don't know very much about it, but I had an opening in my calendar this morning and I was like, oh, I'll join this to see.
C: I'm here, I'm just joining from my phone, so it's not a great connection. I think I had two action items from last time, one of which was integrating kops, which I have not done, but the other one is the builder PR, which we did finally get merged in. It is gated behind a feature flag, and it uses a plug-in model which we are currently iterating on. But yes, so that took longer.
C: There will be other plugins that go into that builder, and undoubtedly there will be fixes that we, or anyone, can make to that plug-in. But yeah, I'm going to try to continue on sort of the master plan, which is to create an operator, get it going in kops, build some experience there, and then hopefully, from my perspective, get it into kubeadm. But anyone else is welcome to go in a different order and get it into kubeadm.
C: I think you'd probably need something relatively similar to the docs we have. Yes, we had a staged doc for when using the version where we built it in with a flag. We should update those docs, because they have likely changed a little bit in terms of setting this environment variable, which activates the feature flag. But we should be able to update the docs on our side.
C: I would say we do. I had, I think, no, to be honest: I think one of the goals is to do other things. So if you would rather do CoreDNS and kube-proxy, we should create those operators also and see that they work. I think that makes sense, I think that'd be great, but yeah, I think getting going...
E: Okay, to give some context from my perspective: I think that we should initially go for something which is equivalent to what we have today, but uses the operator and all, and when we are settled with this, we can let the user choose different advanced options. This is why I'm asking to approach kube-proxy and CoreDNS, because they are what the user has today.
C: I mean, that sounds reasonable to me. I think, yeah, we should certainly build them out. I like the idea of using the configuration that kubeadm has today, and I think that should extend also to the manifest itself, right? We should try to make sure that the exact identical manifest is created.
E: Yes, of course. Now, we should agree that that is something we can achieve, and then we have to figure out how. So yes, but the sooner we start, the more chance we have to get it done before the end of the cycle, because if we wait too late, then there is KubeCon, so it starts getting...
F: If somebody else wants to jump on that, hopefully it's simple enough to attack. What I need to get back to, probably next week, is finishing up the kubeadm integration example for the installer portion of this, so that we can use the ComponentConfig in kubeadm to describe the operators that are going to be installed. And then once I get that up, I'll be open to help out.
F: Yeah, thank you for all of the input on that document; that's been a helpful discussion. I haven't seen any major roadblocks between what I have on paper and what that document is intending and describing. It has surfaced some additional use cases that I'm going to accommodate in the first stab at it, but I think it is important that we have something that's working, that we can iterate on.
F: Yeah, I guess just to clarify, I'm just trying to prioritize getting the plumbing to actually work, and then let's do everything we can to accommodate a good UX.
F: I would like to field this for discussion; maybe Justin has some thoughts. From what I was reading, I looked into the structure of node-local DNS a little bit, a few months ago, and I didn't see a platform-agnostic way to deploy the solution in an HA manner, meaning that when the node-local DNS pod restarts, there is a mechanism for DNS queries to still succeed, so that pods on the shared node are not impacted. And there are a few mechanisms that are specific to iptables.
F: I think that's the primary one. There are also ideas of using multiple DaemonSets, or falling back to kube-proxy, sorry, falling back to a central DNS in the cluster, and switching that on intentionally during an upgrade. All of these require some orchestration, which is definitely ripe for the operator use case, but I'm curious what your thoughts are in this area, Justin.
C: I guess I was imagining that we wouldn't try to do anything particularly much better. So if node-local DNS doesn't offer this, we shouldn't necessarily go further. But I basically haven't... I was going to try exactly some of the things we talked about, like multiple DaemonSets and that sort of thing. If we can do better because we have an operator, I think that's great, but if we can't, I think, I think that's...
C: I did review the original PR, and I think what we sort of realized is: we can put in a lot of machinery, but it's not necessarily better than just restarting quickly. You could change that, and there are exceptions to this, but if you try to create, like, a load balancer or pod that does this, you're doing a lot of work, and it felt like just restarting quickly would probably be the best strategy. But I can look at that again.
A
You
know
what
we
have,
what
what
code
needs
to
be
written?
How
close
we
are,
so
we
would
look
things
from
general
user
standpoint
would
look
at
from
an
operator
standpoint
many
my
standpoint
and
come
up
with
things
like:
okay,
install
a
simple
class
boy.
It
is
simple
cluster
and
if
there's
things
that's
using
I,
don't
operators
use
those
like
generally
use
it
as
really
care
or
will
notice
and
also
use
canes
like
updating
or
finding
a
specific
operator
or
the
against
you
know,
I
mean
there's
like
a
lot.
A
If
you
start
thinking
about
it
and
my
idea
was
like,
let
me
start
collecting
them
and
then
maybe
just
start
writing
up
like
small
scripts
like
even
if
it's
you
know
pseudo
code,
so
you
do
anything,
it
doesn't
too
much
it
just
so
everyone
can
imagine.
So
this
is
the
steps
we
would
take
and
then
we
could
sort
of
look
at
this
and
see
how
close
are
we
or
which
you
know
other
SIG's
or
or
working
groups
that
we
need
to
talk
to
you.