From YouTube: Kubernetes kops office hours 20200131
Description
Recording of the kops office hours meeting held on 20200131
A: I pasted a link in the Zoom chat; I'll replace it because I never know whether Zoom deletes those things. Please do put items on the agenda if you would like to get to them; we have a bunch of stuff in the chat already. Please feel free to put your name in the attendees list if you would like to, and otherwise I suggest we get right into it.
C: Basically, the default storage class that kops adds to clusters for AWS does not have the volumeBindingMode field set, and there is interest in setting that to WaitForFirstConsumer. The issue is that it's an immutable field, and so we were hoping to figure out a good way to get that applied.
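For reference, the change being discussed would look roughly like the class below; the class name is illustrative, and because volumeBindingMode is immutable it can only be set when the class is created:

```yaml
# Sketch of an EBS-backed StorageClass with the binding mode under
# discussion; the name "gp2-wait" is illustrative, not what kops ships.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-wait
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# Immutable after creation: binding is delayed until the first pod is
# scheduled, so the volume is created in that pod's zone.
volumeBindingMode: WaitForFirstConsumer
```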
A: They tweaked it so that, instead of those heuristics where they basically create the volumes spread across the zones and then the pods would follow the volumes, they now don't create the volumes upfront; instead they schedule the pods, and then, when the first pod needs it and the volume is created, it looks at the zone of the pod and says: aha, now I will create a volume in that particular zone. It is a difference in that, if you have, like, a zone B that is unavailable temporarily, it will behave differently, but it is probably the right thing to do. I don't know whether we... because if we manage to change it in place somehow... sorry, if we don't change it in place, then users that upgrade will have a different behavior than users that don't, which is, I think... but.
D: Is the default what upstream has as the default? I don't know if I'm putting this right: we currently take the default, disable it, and leave it there. There's one that's called "default" and we remove the annotation making it the default, and so we have one that's called "default"; it's not used... well, it can be used by
A: ...name, but yes, because we know that there are many manifests out there in the wild that refer to these things by various names. So we can't really... we really have to be very careful about changing the behavior of one of those significantly, and we also don't just delete them, because otherwise workloads would presumably fail to work on kops clusters. I'd be in favor of...
A: I am very biased because I wrote the original one, but yes, I'm happy to follow along. It would certainly... we should have a release note about it if we're going to change the default, but I'm happy to bow to other people's opinions about which one should be the default. But yes, I certainly agree that, because it's immutable, and because we probably don't want to end up with a split-brain type thing across versions, we should create a new one, and then whatever people want in terms of the default.
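For reference, Kubernetes marks the default storage class with a well-known annotation, so a hedged sketch of switching the default to a newly created class might look like:

```yaml
# Hypothetical replacement class; only the annotation below is the
# standard Kubernetes mechanism for marking a default StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-v2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
```

The old class would keep its name for the manifests that reference it, with its own annotation set to "false".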
G: Last I looked I thought that was in beta. We use it; we enabled it on our clusters, and the field is mutable, I believe. I'm trying to find the code now so I can look that up, but I don't think we need to make that change at the same time as this, because we adjust that in real time on our clusters. So I think that's, you know... but I like it; I mean, it's a feature I think a lot of kops users would like, yeah.
H: On the old behavior versus the new one, I actually have a comment. Now that I'm looking through it, I probably would want the old behavior, in that, when my users create pods, they may create a few hundred that need volumes, and waiting for the first consumer would not be an issue if AWS was actually fast with the calls to the APIs. So having the configuration to do it either way would be very nice.
A: But yes, so it sounds like we're at least in agreement on the naming front, and we'll have a discussion about creating a new, newly named one, and then a wider discussion around the default. And probably, as you suggest, we should loop in sig-storage about, like, why is this immutable in the first place and all these things, and should it be the default. Does that sound good? Okay. And because we believe expansion is a mutable field, i.e. it can be changed, then we don't necessarily need to include it in the same change.
C: So the AWS VPC CNI provider supports configuring various settings through environment variables; for example, you can keep a warm pool of IPs or ENIs attached to instances that are not yet in use by the pods. I know there have been requests on Slack and GitHub issues to configure that, so I decided to make that more flexible by allowing any environment variables.
C: So I was debating between whether a map or a list, and I decided to do a map, because I think the main benefit of the list is that you can do valueFrom and reference secrets or config maps, but none of the VPC CNI provider's values would I consider secret. So I don't know if that's necessary, but I could certainly switch it.
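As a sketch of the trade-off being discussed (the field names here are hypothetical, not the final API; the variable names are real AWS VPC CNI settings):

```yaml
# Hypothetical field shapes, shown only to illustrate the map/list choice.
# Map shape: compact, but no valueFrom support:
networking:
  amazonvpc:
    env:
      WARM_IP_TARGET: "5"
      WARM_ENI_TARGET: "1"
# List shape (container-style): more verbose, but could reference secrets:
#   env:
#   - name: WARM_IP_TARGET
#     value: "5"
#   - name: SOME_TOKEN          # hypothetical secret-backed value
#     valueFrom:
#       secretKeyRef: {name: cni-secret, key: token}
```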
A: But yes, so this is a good time; I'll give a quick update. So, as I said, we got 1.14.1 out, and I think that one's a good one, and that's good, because that's an actual, like, release release. On 1.16.0-beta.1 I forgot to bump the kops-controller version, as I believe I also did in 1.17.0-alpha.2.
A: I know also that something else got into 1.17, something else merged into the 1.17 branch, that probably should have waited but just missed the window, so that will now make it into what will then be 1.17.0-alpha.3 and 1.16.0-beta.2. If anyone else has anything they want to make sure is in there, please merge it if it isn't already on the branch.
A: Great, and then on the other releases: I think the other ones that were on the calendar... 1.14 was proposed, but I didn't really see anything in there that would necessarily warrant a bump. I don't know if anyone had anything in particular they were thinking of, so I did not do that yet; but if we feel strongly about it, we can certainly do it, and we have 1.15 coming out as well.
A: Okay, well, if I see a backport, if I see an item to go into 1.14, then we can certainly do it. I would suggest we focus on the newer ones for now instead, unless we need it for some reason. I don't know if... the recent 1.18.0-alpha.1 I cut but did not push, so you'll see there's a tag and a release note.
A: There's a commit including that, but the challenge is it doesn't work, because... so we're jumping around on the agenda, but this is, I guess, further down on the agenda: there's this issue with permissions, and so once we figure out what we want to do there, then we can fix that and cut 1.18.0-alpha.1, because I did tag it.
A: Yeah, and alphas are, like... so yeah, we should already get that in the alphas. We don't close the branch either, and we're going to keep it on master as well. So we're not going to create the 1.18 release branch yet; we're going to keep the branch open for features. We have two months till the release, I think two months till the release of 1.18... I don't know, we have around between one and two months until 1.18, so it's a little early to close for features right now.
A: So, but yes, we should also get your fix in, and I think I did follow up and find out that a lot of the functionality of the health check has moved to the node-problem-detector, and so we don't necessarily need to run the health check going forwards, but we probably should start to run the node-problem-detector at some stage.
A: My feeling is we should... so we have two choices when it comes to people that choose Docker, as it were: we can actively remove it from a certain version of kops, sorry, from a certain version of Kubernetes, or we can just leave it in there and let the usage of Docker, the people that choose Docker, go down, and the people that choose containerd go up, and then gradually people will no longer be running the health check.
A: And I think it's going to come down to... that sounds reasonable. It's just going to come down to: what are we doing about older versions of Kubernetes or of kops, and are we changing behavior for those users, and do we want to do that? So I can take another look at the PR around that, but that would be my concern.
F: I don't have a strong preference there, but I think it's your call, so on the code we can figure it out. But yes, my worry last time was that I wouldn't want to run that in my production, because it will hide some issues, maybe, and it's unpredictable. I doubt that many people know that it exists in kops, so it's a component known only by the people that needed it. I don't doubt that it helps some others, but I don't know; it's hard to keep track of everything that runs like that.
A: All right, well, I can certainly try to get that in. And looking at it... I don't know if anyone has any views, like: the node-problem-detector is the way to make it visible, right? Like, it addresses that visibility point, and on the other end it also addresses the... it shouldn't fire, because as far as I know we don't actually do anything with the node-problem-detector by default, so it won't do anything when it sees this problem; it will just mark the node as bad, whereas the health check restarts Docker. That's what...
A: I've added a comment there. All right, let's see, we've jumped around the agenda a little bit. Let's see: we did cover the 1.18.0-alpha.1, and should we try an early alpha? Yes. Moving on: Docker health check, Docker builder, we talked a little bit about that; I think we'll follow up a little bit more. The CoreDNS cache metrics regression: was there anything we did not talk about on that front? I don't think so. I suppose, once the fix is merged, we can cherry-pick it back to 1.14 and cherry-pick it to 1.15; I don't think that's an issue. I've raised it upstream in k/k as well. Yeah, that was pretty much just covered.
A: I think, first of all, it means we'd like to have a missing test... although maybe not; depending on how the version thing went down, we might have a missing test, and I want to make sure we're actually testing that the kops-controller is coming up correctly. I think, like, should we even try to run kops-controller under a non-root user would be one question, because it's going to end up pretty privileged anyway: like, if it's going to get the CA key anyway, which it probably will, then it's basically root in the cluster no matter what we do. And then the other topic is, in general, how should we do these log files? Honestly, I don't know what the right approach is. I guess we have to make them world-writable, but I don't know if there's some trick... so I don't know: could we make them owned by user 1000? Can we do that?
A: The kops-controller e2e should be running the built kops-controller that it builds itself and uploads to a GCS bucket. Yes, it was running the old tagged version first; yes, uh-huh... I don't know, it's magic. Yes, there's definitely something there. Like, I think it'll be interesting to see whether e2e breaks once I've done the tag, because...
A: ...we can do that, but yes, it should definitely output the version, yeah. Let me make a note of that... oh, I can't make a note of it because of Zoom... let me... okay, uh-huh, all right. But I think if we pre-create the file in nodeup with the user 1000, we don't technically need to start handing out user IDs yet, so we can keep it as 1000, because dns-controller does not currently mount anything.
I: So it affects... it allows you to choose to use the launch templates when you're not using mixed instances policies. I think there are probably arguments or reasons for people wanting to still use non-launch-template setups; they can use launch configurations with non-mixed-instances policies. So yeah, I think having the flag would be my call as well.
F: Is it something different for someone that uses them in the kops configuration, or is it just different for provisioning or setting them up? Because that's the side... I didn't dig that much into it, but it seems that it should work pretty much the same for someone using kops. Not sure... Ryan, are you listening? Yep.
B: The feature flag just lets you flip it on even when you're not using mixed instances policies. I could see a reason to make it a field instead of a feature flag, just so you could specify on which of your instance groups you may want to switch that on; if you weren't comfortable making that change right away, you might not want to turn it on for all of them. But, that being said, we've been running the mixed instances policies with the launch templates for almost two years now.
A: I agree that we should... it sounds like we should make it the default, in that it sounds like it's the future and we should try to get there; and I think the issue raised about, like, not breaking people's existing clusters feels important as well. So from that point of view, we can certainly do that with a field; in other words, we can make it so that the default, if it's not specified, is the legacy behavior.
A: Okay, great... all right, I'm actually going to share my screen before my laptop melts. Didn't work... it did work, all right. So the intention is, let me jump to the punch line here, which is adding in the ability to manage essentially arbitrary objects, obviously Kubernetes objects, alongside your kops cluster. So today you have... we have two... sorry, it's called cluster... we have two.
A: We will now also be able to manage effectively arbitrary objects, and so we have two arbitrary objects here: a custom resource definition itself, and an instance of that custom resource definition. And so in this case those objects, when we create the cluster, when we update the cluster I should say, are applied to the cluster using the existing manifest mechanism. So essentially we are able to extend
A: ...the project is, I think, producing... aiming to produce a lot of these operators, and so if you want the default configuration for node-local DNS, you just do that, which is a sort of empty object, and this says: please install node-local DNS for me. But you can also do things like spec.version to specify an absolute version of a manifest. You can do things like spec.channel, I think, to say: I want to subscribe to the stable channel or the alpha channel of upstream; and we haven't really defined where those channels live yet, but the idea is we are essentially taking these sub-objects that were previously in the cluster and effectively splitting them out into their own objects. So we are hopefully getting rid of the problem around versioning
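As a rough illustration of the add-on objects being described (the API group, kind, and field values below are hypothetical placeholders, not a shipped API):

```yaml
# Hypothetical add-on object; an empty spec means "install the defaults".
apiVersion: addons.kops.k8s.io/v1alpha1
kind: NodeLocalDNS
metadata:
  name: default
spec: {}
---
# Same kind, but pinning or subscribing instead of taking the defaults.
apiVersion: addons.kops.k8s.io/v1alpha1
kind: NodeLocalDNS
metadata:
  name: pinned
spec:
  version: 1.15.1     # illustrative: pin an absolute manifest version
  # channel: stable   # or subscribe to an upstream channel
```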
A: ...where we always have to update kops. And then, when I add... so I've added, in another PR, that when you create a cluster you can add some YAML files effectively alongside it, and so that's how you do arbitrary YAML; and then we'll probably have some nicer workflows around some of the built-in operators. And then, finally, the other change, which is, I think, more problematic, is: do we build in some of those?
A: So node-local DNS has an operator for it, and we can build those controllers, those operators, into kops-controller, which means you don't have to end up with, you know, 50 different controllers, 50 different add-on operators, running on your master. That can help with sort of resource utilization and everything, and with sort of understanding what's going on. It does mean that, if we do that, we need a way to turn those off.
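The opt-out mentioned here might look something like the following sketch; the field names are purely hypothetical:

```yaml
# Hypothetical shape for disabling a built-in add-on operator, so a user
# can run their own copy without a versioning conflict.
spec:
  addons:
    nodeLocalDNS:
      managed: false   # skip the operator built into kops-controller
```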
A: Otherwise we end up back with the versioning problem, but that is something we can, I guess, figure out. But that is, I think... I wanted to show that; that's the first time we've actually... I've shown, like, the big picture of the model from 10,000 feet and how it could work. And we want to get to a point where there's nothing special about these add-ons, and if you want add-ons, or if you want to install your own apps that way, you can do that.
A: Yes, that is a good piece of feedback: we need to make that work, and we also need to make it work seamlessly, right? So we both need to make that work for add-on operators, and we then need to, like, bring it into the kops workflow. But oh, that is great feedback, and I think I will write that down in the notes as: must fix.
A: Feedback: how is it going to work with assets? It certainly does, and it does enable you to... the channels should just be something similar to what we actually have, where it's just, like, an HTTP endpoint, and it would certainly be relatively straightforward to build your own channels if you wanted to. But that's me waving my hands; it's not a... sorry, yeah.
A: ...point: I think we're going to need another field, which is, like, what to do in that case. I think the default should probably be, like, to put it in status so that I can choose to update, but we might also have a YOLO mode where, you know, we just apply it hot. I can't imagine many people would run that in prod, but for dev it might be super handy, right? Like, if you're a CoreDNS developer and you want to, like, stay on the bleeding edge, then maybe you want that.
A: That seems right; if people start using it for that, like, that's up to them, but the use case we're optimizing for is, like, the cluster add-ons that share a sort of lifecycle with the cluster. So, like, typically things where, when you update them, you might have to bounce nodes or something like that, that sort of thing, or where you think about it with a Kubernetes version. Okay.
A: There are some exceptions, but, like, we know we're going to have some problems; but yes, we're going to end up with some form of configuration object that can be passed around. But it would be nice not to have a tight coupling between kops and these cluster add-ons, so that you can add them at any time; it just feels nicer. But we know that, I think... so I think for node-local DNS there's a trick, which is you can, like,
A: ...look at your /etc/hosts. I think there's a trick to get your DNS... the DNS IP you can get by looking at the service. I think one of the tricky ones is, like, the cluster CIDR: so, like, knowing the CIDR block... in theory you can get that by, like, looking at the API server pod, but that's just even more horrible, so I think in general we want to stay away from that one. That's where we, like, get into tricky territory, but we're sort of still exploring the space.
A: But I think what I wanted to sort of get at today was this model of breaking up the cluster object into typed CRDs that you can manage alongside, and that we would continue to support the workflow for... again, if they just want the workflow of the get mechanism, where I can get and apply files effectively, and the S3 workflow. Okay.
D: So actually, I want to make a confession about how we installed the Cilium kops add-on: we use it to get the cluster bootstrapped and running, and then we remove the manifests and reinstall our Helm chart of Cilium, and then we have to neuter the channel, or the kops add-on, so kops doesn't try to upgrade it again, yeah. So we would rather kind of use Helm, because we can roll back and kind of have the Helm features.
A: It would be great to dig into that with you sometime; I don't know how much you're able to say on a public channel, but yeah, finding out, like, the use of templating versus versions, and whether you run some kind of controller for versions, for example. But yes, it would be nice; maybe I'll ping you offline or something.
G: Do you foresee being able to... I think I know the answer based on this, but, you know, some of the discussion earlier about, for example, the AWS VPC CNI... once it's in this world, the work Peter was showing of the environment variables thing: would we be able to set that at some level? Do you know what I'm asking? Where we're not in here specifically from the channel, and we're not changing it every time, but I do want to pull down the new version.
A: Yeah, it's... we're basically breaking another cluster object out into a new, like, top-level object, rather than, like, piling more stuff on into there; and I think there's a migration path as well, so I think we can... we should still continue to move forwards with, like, enhancements that modify cluster or a child object of cluster.
A: The dream is not us; the dream is that the AWS VPC CNI project would itself maintain an operator. The theory is that they are the best people to understand how to operate their add-on. We know that we've had to bootstrap this, and we know in kops that we've had to bootstrap this as well; so in the short term we can certainly maintain these, but we shouldn't be... we shouldn't...
A: We would hope to not maintain ownership of that forever. So if there was a requirement for something, then we could put it in... the upstream project would do it. I think there is an interesting question there, which is, like: what if they aren't willing to expose things that we know we need? In which case we can fork it, not necessarily in kops but somewhere; but it doesn't necessarily need to be... the goal is, it doesn't need to be in kops, it doesn't need to integrate into the kops workflow.
A: Like Knative, for example: Knative relies on a particular version of... Knative is tested with a version of Istio, and so Knative needs a minimum version of Istio, and if you go to a future version of Knative, that's okay, or we should just basically not update Knative until Istio has been updated. So we assume that someone is updating both of them, but, I don't... you're right, we don't have a good answer for how we drive the updates.