From YouTube: Kubernetes SIG Cluster Lifecycle Cluster Addons 20201208
A
All right, welcome everyone to the Cluster Add-ons meeting. Today is Tuesday, the 8th of December.
A
I think I recognize all the names on the participants list, and we had one action item from last time, which was for myself to ping Evan. He responded to me about the manifest bundle KEP and said we should merge it now; if there are small edits to make, we can still make them later on.
A
Super — well done, everyone. All right, looking at the agenda, Chris had an agenda item.
C
Yeah, cool, I'll jump into it real quick. We started a project a little bit ago — we're slow-rolling the beginning of it — but we're in the process of moving a code base over from the multi-tenancy SIG's repo. It came out of the multi-tenancy working group, it's called virtual cluster, and we're re-pivoting the actual API definition to be a Cluster API provider.
C
You'll notice the link that's in the agenda: cluster-api-provider-nested. We're experimenting, and we wrote a design doc that proposed going down this path of using the declarative patterns project — pretty much following what this group has been doing around building add-ons, but specifically to bring up control plane components. So I really wanted to bring it up to this group. Justin came to our last meeting last Tuesday.
C
Thank you for doing that. But I also wanted to pose a question and see if there are any particular learnings you all have had from building these things. Any major problems you've run into that you found to be serious blockers to making progress?
C
In essence, what we're going to be doing is bringing up an etcd provider — an etcd cluster that will get brought up in a cluster add-on style. That likely won't be the most supported piece of the code base; it'll just be there so that you can actually use it.
C
The idea is that you'll be able to replace the etcd implementation with whatever you want — there's a lot of prior art in that space right now, like CoreOS's etcd operator or the one Jetstack worked on for Improbable. The other two components in our stack — because we don't deploy a scheduler — are the API server and the controller manager, and in essence we're going to package those up similarly.
C
So if there's any feedback, anything you've all found that we shouldn't do up front, I'd love to gather as much of that information as possible before we start building these.
D
It sounds to me like the main value of using the declarative pattern is going to be the ability to translate those cluster-specific objects for provider-nested into instances of the templated packages that are linked from the declarative pattern. The packaging format supports either embedding the manifests directly on the file system or loading them from channels. So that could be a very useful thing — or part of your security model that you would lock down, depending on who's using it.
C
…a single controller manager. So, if that's something you all have done — is it a good idea or, rather, a bad idea? For example, to deploy the entire control plane we need the controller manager and the API server together. Should we be packaging those two pieces together, in your opinion, based on deploying other add-ons with this mechanism? Or does it make things more customizable to be able to separate those two out? Specifically, we're going to be doing a lot of patching — for example the CIDR range that we need these clusters to have — and we're going to be changing feature flags, similar to what we do with kubeadm. In essence, yeah.
D
This is an interesting point. I didn't read the design doc, so I didn't know about this separate-packaging suggestion. I foresee that you have to ask yourself what you're winning from that separation of controllers, because you're going to end up with separate custom resources that then need to be — I mean, each custom resource is probably just going to have to have the CIDR range copied into it, or you'll have to have some parent object.
D
So there's just a lot of complexity there as soon as you have a multi-object Kubernetes API. Are you getting multiple objects because these things are not a one-to-one coupling? Do you want to separate the creation of API servers versus controller managers via RBAC? Probably not.
D
I guess you could expose a different status for each one, which is kind of powerful, but you could also just have different conditions on the same custom resource.
D
I could certainly see it going either way. My initial intuition was that there would just be a single object, packaged together, because that's the abstraction you're trying to win: I want to put these numbers in one place and then get emergent behavior from multiple objects coming into existence because of that. And the one object allows you to put a state machine in a single custom resource for orchestrating control plane upgrades, which is otherwise kind of hard.
D
I would just want to be convinced of the benefit of splitting the objects up and then having multiple controllers — there are failure modes to consider as well, right? There might be some benefit where the thing that makes the controller manager run is less failure-prone, or it has its own CRD.
C
The idea there is realistically being able to decouple upgrades, so that you could upgrade the API servers before you upgrade, say, the controller manager. We could be rolling API servers while we're still waiting to roll our controller managers, at least, because it's a pretty limited subset of resources. It's the same reason we're also breaking out etcd, and not having all three in the same object like it was originally with virtual cluster.
D
Yeah, I mean, you could program that state machine into a single custom resource pretty easily by just having two version fields for the different things. It just depends on your rollout strategy: if you only have one rollout strategy, where you want all your API servers to upgrade beforehand — which is the recommended order, to my understanding — then the controller can just record in the status object that it's currently doing the API servers.
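As a rough illustration of the single-object approach being described here (all field and phase names below are hypothetical, not taken from any real API), a reconcile step could pick the next component to roll based on two desired-version fields and record progress in status:

```python
# Sketch of a single-custom-resource upgrade state machine: the spec
# carries separate desired versions for the API server and the
# controller manager, and reconcile always rolls the API servers
# first (the recommended upgrade order mentioned above).

def next_action(spec, status):
    """Return the next component to upgrade, or None if converged.

    spec/status are plain dicts standing in for a custom resource;
    the field names (apiServerVersion, controllerManagerVersion)
    are made up for this sketch.
    """
    if status.get("apiServerVersion") != spec["apiServerVersion"]:
        return ("upgrade-apiserver", spec["apiServerVersion"])
    if status.get("controllerManagerVersion") != spec["controllerManagerVersion"]:
        return ("upgrade-controller-manager", spec["controllerManagerVersion"])
    return None

spec = {"apiServerVersion": "v1.20.0", "controllerManagerVersion": "v1.20.0"}
status = {"apiServerVersion": "v1.19.4", "controllerManagerVersion": "v1.19.4"}

# API servers roll first...
assert next_action(spec, status) == ("upgrade-apiserver", "v1.20.0")
status["apiServerVersion"] = "v1.20.0"
# ...then the controller manager...
assert next_action(spec, status) == ("upgrade-controller-manager", "v1.20.0")
status["controllerManagerVersion"] = "v1.20.0"
# ...and then we're converged.
assert next_action(spec, status) is None
```

This is only a sketch of the ordering logic, not how kubebuilder-declarative-pattern structures a reconciler.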
D
Okay, so it's just a matter of — yeah. I mean, in the Flux API, right, there's a valid reason why you'd want to separate Git repositories from the applies of those repositories: you can restrict RBAC for one or the other, and feasibly have two different people maintaining those things. But then it adds complexity to have multiple controllers, and there's eventing and object references and…
C
Yeah, no, that definitely makes sense. What about specifically from the add-ons perspective and the way that kubebuilder-declarative-pattern has worked, and things like that — does it expose anything specific that would make us want to go one way or the other?
B
I think the way we've built kubebuilder-declarative-pattern, it doesn't expect more than one source — more than one manifest source. That may be an easy change; I may be overlooking a trick. So if you wanted a split version where the API server went before the controller manager, that would imply two controllers, I believe. That's not to say you couldn't or shouldn't wrap them in a facade object called Cluster or VirtualCluster or something, that is able to orchestrate the sub-controllers.
B
But yeah, the pattern certainly expects a single manifest version, or a single manifest source, per CR. So that would imply thinking in terms of two controllers first, and I think it'd be easier from that perspective. I don't know about packaging it in multiple projects versus one — I know you want to do that for sort of organizational reasons. I think, while it should work, controller-runtime — the structure of the kubebuilder controller — doesn't necessarily make it easy, especially if you wanted them to reference each other.
B
I think you'll end up merging them into one. It used to be easier in terms of having each controller in its own directory. The real problem, I think, is going to be in the api directory, where you have them register into a scheme using a global variable. So it's going to be a little bit messy — I don't know how you would overcome that.
D
Yeah, I think you should still be able to create multiple controllers pretty easily with kubebuilder, but I haven't done it with the v2 stuff.
B
You can certainly create them; I'm just not aware of the option they're talking about, Chris. It used to be that the controllers would go in their own nicely separated directories, which made it very easy to mix and match; now they're all in one controllers directory. I don't know if you can override that — is that what you were saying?
D
Yeah — go ahead and talk about the multi-group option.
C
Yeah, so after you generate your project, you just do `kubebuilder edit --multigroup=true`, and it gives you a couple of directions: move these into apis/ instead of api/, and move the controllers into controllers/, under whatever the group name is. From there on out it auto-generates into those directories.
C
At least all the control plane components will be under controlplane.cluster.x-k8s.io. It'll be when we switch over and do the infrastructure group — to create the cluster version as well — that we'll be introducing that second group. Fun stuff.
C
Okay, cool. This has been helpful so far. If there are any thoughts you all come up with — major learnings or pitfalls you've run into, or that come to you after this — hit me up in Slack. I'd love to hear more of those things before we fully embark on building out all four — or three — of these components.
D
So if you change things in the upstream channel, you need to make sure to keep that RBAC up to date, or scope your changes within the controller's RBAC limitations.
D
So that's something that would be a maintenance concern. And if other people are trying to extend those packages — say they want to install something alongside their API server — this is an area where that gets a little bit hard.
D
There's not a technical limitation, but there's certainly a usability issue there that you may have to deal with, because if you've got, say, controller v1 in the cluster and then you added some object, the controller might not have permission to apply that object.
D
I mean, if you give it a wider permission set — using edit or admin, or even cluster-admin, depending on what you're doing — then you'll have less chance of running into that problem.
D
I do think that having the sub-object thing would work, with the multiple controllers and multiple packages, but I would just caution against the complexity — make sure you know why you're doing it.
D
Yeah, I think the RBAC thing shouldn't be that big of a problem, because the amount of extension, and of change in the objects you're administering, is probably pretty minimal. A virtual cluster is a pretty good use case in that respect, in comparison to somebody who wants to bolt on weird things like an ingress controller or cert-manager or something.
D
We'd certainly want to see the demos, that's for sure. Right, cool — Justin, you have the next bullet, which is a pull request that I haven't looked at yet.
B
Yes, thank you. So the next bullet — I don't think I put this on here, but it's around an MVP YAML transformation helper; it's basically to serve the idea of creating manifests. This probably naturally comes after the next topic, but there's some progress on the kOps integration, and as part of that we will likely need to be manipulating YAML manifests from upstream, to do things like splitting out RBAC, as we just talked about. So I've started to create the tooling that I need in order to do that manipulation programmatically.
B
This is sort of Kubernetes-aware YAML transforms. We've started with one trivial transform — because it's an MVP, it's basically just the machinery, plus a transform that removes a label — but other ones coming down the pipe do things like removing volumes.
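Conceptually — and only as an illustration, since the real helper discussed here is Go code built on kustomize's kyaml — a structural "remove a label" transform operates on the parsed object tree rather than on text lines:

```python
# Illustrative sketch of a structural "remove a label" transform.
# The actual tooling is Go built on kustomize's kyaml (which also
# preserves comments); this only shows the idea of editing the
# parsed object tree instead of raw text.

def remove_label(manifest, key):
    """Remove one label from a parsed Kubernetes manifest (a dict)."""
    labels = manifest.get("metadata", {}).get("labels", {})
    labels.pop(key, None)  # no-op if the label isn't present
    return manifest

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "coredns",
        "labels": {
            "k8s-app": "kube-dns",
            "addonmanager.kubernetes.io/mode": "Reconcile",
        },
    },
}

remove_label(deployment, "addonmanager.kubernetes.io/mode")
assert deployment["metadata"]["labels"] == {"k8s-app": "kube-dns"}
```

The example manifest and label names are placeholders, not taken from the PR under discussion.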
D
A big one could be finding all the Services and changing them from NodePort to LoadBalancer, or something specific to your implementation — or downgrading them. Exactly — those are Services, yeah.
B
Yeah — so this uses a toolkit from kustomize which preserves YAML comments. That's why the initial MVP is bigger than you might perhaps expect, but from here on in it should be smoother sailing. But if this is the wrong approach to use altogether — if there's some other tool for Kubernetes-aware YAML manipulation — please let me know.
D
I now remember looking at this, I think one time when you, Satoshi, and I were working together. The intention here is that it's built as a binary, but it's also useful as a library?
B
Right, it can be used as a library. I don't know if we want to encourage that, but yes — we could start using these same files, the same approaches, to also do transforms inside our operators.
B
So if you did want to remove a label — you can already pipe through a pipeline, and I believe these can be plugged into that pipeline.
B
You run it manually and commit the output; today it uses make and commands. I think we can also plug this into kustomize — because the libraries come from kustomize, I think we can actually plug this into kustomize using some of their extensions.
D
I think you can — you can either create a transformer or a generator, something along those lines, and you can add it to your kustomization.yaml.
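For reference, the wiring in a kustomization.yaml looks roughly like this (the plugin file name and its contents are hypothetical — this only shows where a custom transformer hooks in):

```yaml
# kustomization.yaml — wiring in a custom transformer plugin.
# "label-remover.yaml" would hold the configuration for a
# hypothetical transformer that strips a label, as discussed.
resources:
  - deployment.yaml
transformers:
  - label-remover.yaml
```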
E
This is Nick. I just wanted to point out alternative tooling that could be useful here — I don't know if folks have heard about CUE. It lets you also include schema validation as part of the data, so essentially everything has to shake out according to a certain schema that you've defined, or else it just won't evaluate at all. I don't know if you can do removal transformations with it, like removing a label or something, but just note that it's another kind of tool out there that has a reasonable amount of work in it.
E
I haven't used it for anything beyond toy, trivial things, so I don't want to lead people astray with this — I just want to point out that maybe it's something to look into.
C
If you want a use case: Ilya — I always pronounce his last name wrong, so I'm not going to try — used to work with you.
C
He has been doing a bunch of stuff with CUE — I think he was building an operator for GKE or something like that with it.
D
And one of the handy things — probably very attractive to somebody like Ilya — is that CUE is usable as a Go library directly.
D
Cool. Well, honestly, I think this patch is super uncontroversial for me. There's certainly a niche for this tool to exist, particularly since we need some way to talk with each other about how to maintain manifests.
B
Cool, yeah, thank you. I think the primary thing was just to make sure there wasn't some other tool. I can certainly have a look at CUE and what Ilya is doing. I'm just browsing around the docs — I don't see transforms highlighted. That doesn't mean it's not there; I just haven't found it yet.
B
If there's nothing else, then that was the first thing to figure out — whether there was some other tool I should be using. Thank you, Nick and Chris, for the pointer.
D
Yeah, I mean, beyond using something more full-featured like Starlark or cdk8s or jk config — which all can import the YAML, and then you can use a programming language to mutate it. With those tools you'd be writing these transformations by programming against the manifest, which wouldn't be dissimilar from writing the Go code you have in this library. It'd probably be more terse, because they have helper functions already available. And there's a library on top of cdk8s — it'd all be TypeScript — called cdk8s-plus, which has this kind of intent-based chaining style.
D
So you can get a defaulted container object with a single function call, then template it with some optional arguments and add volume mounts and stuff with chained functions. Those tools are certainly capable of taking an existing manifest, transforming it, and outputting new ones. But it's a wider scope.
D
I mean, you could probably implement that with some TypeScript libraries, I suppose, and then just have a very small cdk8s or jk config project that could fit that niche.
D
But then it's not a pure Go dependency at that point — if you're running jk config, the V8 interpreter is embedded into the binary.
D
Really good question. I think what Justin's doing with kyaml is basically what you're talking about, except more structural: it's not line-based like sed or awk, it's structured around the keys inside the parsed YAML. But it's just Go, and when you write it, it's just not very terse — it doesn't deal well with dynamic types, and there's all the error handling and interfaces and so on.
F
Yeah, maybe somebody can look at Starlark, if it can be part of the goal — because these substitutions, generators, and all of this, in all the places we've been seeing them, have to do with some kind of parsing. And if there is some common parsing which can be exploited for the Go objects…
B
It's certainly interesting to think about how the libraries we're building here could fit into other places, including our own operators — that's an interesting pivot.
D
Yeah, and I think kyaml is definitely a piece of the puzzle — it's a powerful building block. But the whole reason you're writing this tool is that it's not high-level enough for a lot of the things you'd want to do, and it's maybe a bit tough. I almost think some of these things would be easier from a functional programming perspective.
D
If we look at FP libraries for Go, a lot of those kinds of map, dictionary, and reduction transforms are really exactly the kind of thing you want to do, but it's just a totally different programming model than iterating over things in a loop. So sometimes the ideas for operating on these kinds of nested collections could be more concise. It's also not clear to me — I guess Skylark might be the Go version of Starlark?
B
Naming
debacle,
and
so
the
name
was
changed,
so
starlark
is
now
skylark
and
it
does
have
implementations
in
go
java
and
I
believe,
rust,
the
future
language
for
all
of
us.
D
Seems like you knew a little bit about Starlark already. Did you have an opinion on using it for these kinds of transformations, Justin?
B
I did debate it. So the kyaml library is the one that keeps the comments and basically preserves the structure, which I wanted to use. I did debate wrapping the kyaml library in Starlark and then exposing it to, effectively, Starlark scripts. That would be even more work, though.
B
Out of the box, Starlark does almost nothing — it's just a very sandboxed Python-style environment, with very limited abilities to do things, by design. To make it do anything, you plug in extension objects. I know that Cruise and Stripe created skycfg and Isopod, I believe, which are Kubernetes extensions for Starlark, and Bazel uses Starlark and extends it with their own objects for building stuff.
B
So the most natural way this would work is to either use Isopod and/or skycfg and basically not worry about preserving comments, or to expose the kyaml library to Starlark and write these things in Starlark. I felt like this was a better way to start, but the kyaml library is actually fairly complicated to use.
B
If you look at the code, it's surprising: you basically have to manipulate the YAML as — not even a DOM tree; I remember the days before the DOM — more like an AST, or a minimally parsed token stream.
D
That's pretty hard to use, okay, yeah. Well, in that case, I guess one idea that could come out of this conversation is: given skycfg and Isopod, what is the kind of high-level interface those things produce, and how can we take bits and pieces of that interface and do something similar in kyaml, or on top of kyaml?
D
Then, if we have kyaml right now for messing around with that interface and producing something useful, it doesn't prevent us from graduating to using those exact same libraries behind some other interface later on. Yeah — that sounds good to me.
D
Sweet. Any more comments on that, just speak up. Daniel, was this bullet point in here from last time or something? I did make this cool repo that you should look at if you have some time, but yeah.
A
I can't remember whether you showed this last time — I don't think so.
D
Yeah, well, thanks for throwing it on there if you did. I made a repository called capi-flux-demo. I guess I'll just share my screen — I don't know if I mentioned this before.
D
Oh, I guess screen sharing is disabled. But yeah, the link's in the agenda. This is just an example of how to bootstrap multiple clusters with Cluster API using Flux, and you can see it creating some pretty complex, emergent behavior for managing things from day zero with GitOps and add-ons.
D
Those add-ons could be add-on operators using the declarative pattern, which could also be sourcing their packages from OCI registries, or Git, or some channel. And I'm finding that there aren't a lot of really open examples of multi-cluster stuff that's actually useful.
D
Feel free to mention those demos here. I guess we could also share that kind of work, and how it cross-cuts with cluster add-on stuff, directly in the Cluster API call. This is basically using Flux as an alternative to something called the — oh, I can't remember what it's called anymore; too many words in my brain. There's an experimental feature inside the Cluster API controllers that's able to basically apply a bundle of manifests, and this is an alternative approach to that.
D
ClusterResourceSet — that's what it is. So yeah, it's possible to use other things from the community to accomplish similar results. And this relates to our group in that the bootstrap flow would work for getting declarative-pattern operators running, that kind of thing.
B
Thank you, sure. So — I feel like after many months of not making a lot of progress on the kOps integration, I have made a little bit more progress and got some things into place. The PRs in kOps have now been updated with the minimal RBAC permissions, based on Satoshi's work — so basically pre-creating the RBAC permissions alongside the operator — and there are a bunch of PRs of that sort.
B
I think we are in a much better place security-wise, sort of overcoming some of the kOps maintainers' objections to the approach, and so that is great.
B
Thank you, Satoshi, for that work. And then there's what I think is a more exciting one. The other big objection was that some people — quite rightly — want to have a clear understanding of what the operator, or what anything, is going to install onto their cluster, and the operator, because of its indirection, sort of makes that harder to understand. Certainly compared to the status quo today, where the manifests are basically hard-coded into the kOps binary, and we are therefore able to tell you exactly what it's going to be.
B
The flip side of that is that every time we want to change one of those manifests, we have to release a new version of kOps, which is a pain — and there's coupling of versions, problems of that nature. So we want the operators, with their separate release cycles, but we still want the ability to understand exactly what the operator is going to produce. And that's what this second linked PR does.
B
It runs the operator client-side in what I think we've previously called one-shot mode: it basically produces, on standard out, the YAML that it would apply. So the kOps binary is able — it has to run a Docker container, but it is able — to expand out an operator that it has sourced from the web, or your private registry, or whatever it is. And there's a trick there which I was quite proud of: obviously, on standard in, you pass the CR — the instance of the CRD — that you want the operator to expand.
B
You can also pass other objects. The use case for that is, for example, the CoreDNS operator — the one I've been playing with — which looks up the IP address to use for the kube-dns Service. Currently it uses the kubernetes default Service — that's confusing: the Service named "kubernetes" in the "default" namespace — and it does that to infer the CIDR range for Services.
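As a rough illustration of that inference (the offsets below are conventions assumed for this sketch, not taken from the operator's code: the apiserver claims the first address in the service CIDR, and kube-dns conventionally sits at the tenth):

```python
import ipaddress

# Sketch of deriving a kube-dns ClusterIP from the "kubernetes"
# Service ClusterIP. The /16 prefix length and the +10 offset are
# assumptions for illustration; the real operator may infer the
# CIDR differently.

def kube_dns_ip(kubernetes_service_ip, prefix_len=16):
    """Guess the kube-dns ClusterIP from the kubernetes Service IP."""
    # Reconstruct the service CIDR from its first allocated address.
    net = ipaddress.ip_network(f"{kubernetes_service_ip}/{prefix_len}",
                               strict=False)
    return str(net.network_address + 10)

# e.g. a cluster whose service CIDR is 10.96.0.0/16:
assert kube_dns_ip("10.96.0.1") == "10.96.0.10"
```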
B
So what you can do is pass additional objects on standard in, and then we run the operator in a special sort of sandbox mode, where it's able to see those objects as if they were coming from an API server. So this is that second — or last — link.
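A minimal sketch of that "extra objects on stdin" idea (the function and the string handling are made up for illustration — the real implementation is Go, and would use a proper YAML parser): split the stream into documents and index them so a sandboxed fake client can answer lookups:

```python
# Sketch: split a multi-document YAML stream on "---" separators and
# index each document by kind and metadata.name, so a client-side
# sandbox can serve them to the operator as if they came from an API
# server. (Naive string handling, for illustration only.)

def index_objects(stream):
    objects = {}
    for doc in stream.split("\n---\n"):
        kind = name = None
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kind = line.split(":", 1)[1].strip()
            if line.strip().startswith("name:") and name is None:
                name = line.split(":", 1)[1].strip()
        if kind and name:
            objects[(kind, name)] = doc
    return objects

stdin = """apiVersion: v1
kind: Service
metadata:
  name: kubernetes
spec:
  clusterIP: 10.96.0.1
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns"""

objs = index_objects(stdin)
assert ("Service", "kubernetes") in objs
assert ("Service", "kube-dns") in objs
```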
B
In kOps — or kubeadm — we had a mock client, so I took the mock client, copied and pasted it, and tweaked it a little bit for our use cases, to be less test-oriented and more real-world-oriented. So it mocks out the API server, the client runs against it, and the idea is that this way we actually have a way to pass variables, or state, into these operators — dependent objects, basically.
B
And you can always pass extra objects, right? So the question is: how would kOps or kubeadm know what objects to pass? I imagine we will basically build up a set of objects that we need to pass, and we'll just pass them to all the operators — and who cares; we're going to stream an extra couple of kilobytes over standard in, no big deal.
D
So we could have a package called, like, "standard cluster info" or something, that people can write their operators against.
B
Yeah — I mean, you don't even need to — yes. We have — so I think, actually, Satoshi... sorry, I'm just rambling.
B
I think Satoshi identified some patterns from the CoreDNS operator and refactored those out into a package — in other words, to discover the cluster CIDR, or to discover the kube-dns Service IP address. So those, I think, will be a Go package that operators can use, and we will eventually discover a set of objects that a cluster-administration tool should pass in. So today the project passes in the kube-dns Service and the kubernetes Service.
B
But if we have others, then we can pass those in. Like, I know some things use the UID of the kube-system namespace — that's a trickier one, because you don't know it in advance — but we need to figure out what those objects are and basically just stream them in.
B
But yeah, I'm hoping that unblocks us on the kOps side, because I think those two things will hopefully overcome the main objections. The third objection is around running images and how we mirror images.
B
Client-side expansion should overcome that objection as well, because it uses the same mechanism that we always use.
B
kOps has a nascent image-mirroring solution, and as long as the expansion happens client-side, it'll plug into that. The problem is that if it's an operator, the kOps client-side mirroring can't see the second image.
B
Yes — well, the way we've — I think we actually called it dry-run, amusingly. The way kOps consumes that dry-run output is that we then do not run the operator. So it is an additional mode for people who, for whatever reason, don't want to run operators.
B
We could do that in a more acceptable way — like pre-creating the manifests and then running the operator and having it adopt them — but this is a starting point that I think will get us able to merge, I hope.
D
For things that can't be known ahead of time — like the UID of the kube-system namespace, which you could certainly generate and inject yourself; I'm not sure how recommended that would be — you could run them in a non-dry-run mode from the client side, accessing the API server directly from your machine.
B
Yeah
I
mean
I
feel
like
we've
been,
we
need
to
find
a
path
to
get
it
integrated
and
then
I
think
we
can
start
to
think
about.
D
Yeah — the manifest-expansion thing, the dry-run mode on the client, certainly allows that image mirroring to be possible. That's pretty cool, okay, because that was the third kind of snafu.
B
Yeah — and we also know that some people aren't wild about the idea of operators. I think the nice thing is we can essentially say: even if you have users that are wary of operators, you can write an operator using the kubebuilder declarative pattern and you can still meet their needs as well, by just following this pattern.
D
Yeah, we're running into some similar issues with the Flux installation and bootstrap subcommands of the flux command-line tool, which uses a little bit of kustomize internally but also builds abstractions on top of it. People are like, "I don't want this magic thing," and we're like, "hey, but it's idempotent" — "but I don't want this magic thing." And then now we have a Terraform provider, and now we have people who are using the kustomization raw, and all kinds of other things — folks with lots of different opinions on how to manage their manifests. I guess that's just the world that we're in; it's our future.
E
I guess the last thing I wanted to mention — oh yeah, what's up — I just wanted to ask: since we got that KEP merged for bundles, is one of the next steps for that to have kOps be able to apply a version, or schema, of said bundles for add-ons?
B
That's true. I think there are two places where that KEP work applies here. The first is that we need some way of knowing whether an operator supports dry-run mode and how we would invoke it.
B
So I was imagining using your same approach of using labels to identify that it does support it. And then I think another thing we could do is say: this operator is actually fairly trivial and you don't need to execute it — you can actually just go and take the underlying manifest.
B
So if we had this sort of bypass approach, you could mark it as safe-for-bypass in some way, and then the tool —
B
Well, we have to package that up, yeah. So today the integration runs the operator directly and magically knows the name of the image, that it has to pass the dry-run flag — that it can pass the dry-run flag — and that it has to run the operator and can't just go and get the underlying manifest directly.
B
It would be great to be able to just get the underlying manifest directly and skip running Docker, from a performance and dependency point of view, but in any case we still need to know how to get dry-run output out.
D
We could use OCI registry client libraries inside of kOps to talk to the registry, open up the manifest bundle for the operator itself, and then examine the OCI labels — so basically exactly what Nick was talking about, which is a direct client integration, as well, for packaging the operators themselves instead of something bespoke.
D
I guess the immediate action items for helping there — since we all think the KEP is good — are to make the choice of which OCI labels to use concrete and document that, and then add support for an OCI source in declarative-pattern, since we have full control over that. If we have the labels and then we add the source to declarative-pattern, that's already two things that can happen right now, where we can do code reviews and things.
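To make the shape of that concrete (every label key and value below is a hypothetical placeholder — agreeing on the real, documented label set is exactly the action item), a client could inspect the labels on an operator's image to decide how to invoke it:

```python
# Sketch of a client deciding how to consume an operator based on
# OCI image labels. All label names and values are made up for this
# sketch; the real label set is what needs to be made concrete and
# documented.

def invocation_mode(oci_labels):
    """Pick how a tool like kOps would consume this operator."""
    if oci_labels.get("addons.x-k8s.io/bypass") == "true":
        return "read-manifest-directly"   # trivial operator: skip executing it
    if oci_labels.get("addons.x-k8s.io/dry-run") == "true":
        return "run-client-side-dry-run"  # expand manifests on stdout
    return "run-in-cluster"               # no client-side expansion supported

assert invocation_mode({"addons.x-k8s.io/dry-run": "true"}) == "run-client-side-dry-run"
assert invocation_mode({}) == "run-in-cluster"
```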
E
Cool, awesome. So I'll work on making those OCI labels concrete for operators, and maybe the client-library approach, for that kind of thing, yeah. What may be different — or the place where it may diverge for kOps — is that the labels that tell kOps whether it needs to dry-run the operators don't necessarily need to be on the kubebuilder-declarative side; declarative-pattern itself doesn't really need to know about that. Correct, or incorrect? I guess, yeah.
D
So — this stuff is public and open; it's not super ready to show yet, but after doing some code reviews it's looking super promising, and we're going to be putting an example together. This just relates to writing controllers that are not buggy.
D
I'll probably come back with the next update for that. Yeah, cool — thanks, everyone; super awesome discussions today. I appreciate all of the PRs, presentations, etc. Take care of yourselves, and we'll see you later.