From YouTube: SIG Apps' Zoom Meeting 20200504
A
Okay, welcome everyone to the May 4th, 2020 SIG Apps meeting. Should I say: may the force be with you. Today we have one item on the agenda: we're going to do a demo of an open source tool called kpt, pronounced "kept." I'll let Phil and Morten talk about it.
B
I just want to make an announcement real quick; oh sure. It's a follow-up from a couple weeks ago about the StatefulSet volume resizing work. I talked to SIG Storage; there's an implementer from SIG Storage who wants to move forward with modifications to the existing KEP. So I'm going to work with them and try to get that implemented, and another contributor did express some interest in potentially contributing to that as well. So I think we have enough people.
B
A
C
Janet, are you able to enable screen sharing, or is there something special going on on my end?
D
It'll be easiest to add Phil as a co-host, because they changed the Zoom settings for all of the community meetings so that only hosts and co-hosts can share the screen. And that's okay; that usually works. It's the simplest and the fastest.
A
C
C
It's helpful to give the context of what we're trying to do and why it was designed the way it was. One of the themes is about shifting the resource model, the Kubernetes resource model, left.
C
The Kubernetes resource model, as we often talk about it, lives in the API server, in the control plane, and is built using controllers, mutating webhooks, validating webhooks, and resources. These are kind of the primitives that allow systems to be built on top of the resource model, where the resource model is a set of declarative APIs, with different programs that read those APIs and write to them.
C
That loosely couples a number of different APIs together. What kpt is doing is taking the same resource model and shifting it left, and that means that operating on the Kubernetes resources can be done before they're applied to the cluster, before you even talk to a cluster, for instance. Some of the reasons this shift left is useful, more abstractly:
C
The earlier you catch errors in the system, the better; the less likely they are to go out and impact production systems. An example is compile-time errors versus runtime errors: if it's possible to catch an error at compile time,
C
that would be better than waiting until it's actually been pushed and then catching it there. So the more we can shift the work left, the more opportunity we have to catch errors before they go out.
C
One thing I'd really like folks watching this to take note of is the architecture. I think this is probably the most interesting topic of discussion, rather than what individual feature happens to exist today, because kpt is still new, and it's going to develop new features and probably evolve in different ways; but this architecture is really the foundation of what we're trying to do. I have three bullet points here that I think capture a lot of it.
C
The first is that the output of kpt should be Kubernetes resources, and maybe OpenAPI. This follows the Kubernetes resource model, where controllers, for instance, are writing to resources in the API server. It basically means you need to be able to kubectl apply whatever kpt outputs, or it should follow roughly what the Kubernetes examples on GitHub show; it should look like these things. And it can write these things to different locations: it can write resources to files, to standard out, to API endpoints.
C
These are all valid locations. What's important is that it's writing resources that the Kubernetes API server could conceptually understand. Maybe one is a CRD that isn't installed, so it doesn't mean they need to be the native APIs, but they look and feel like Kubernetes resources. I think most tools kind of follow this eventually, since this is the native speak of the Kubernetes API server.
C
Eventually, if you're going to create a resource, you're going to have to output something that looks like a resource. The next point, which not all tools follow, is what the input looks like: the primary input must also be Kubernetes resources. The input and the output are symmetric.
C
It may read OpenAPI in addition to Kubernetes resources. This follows the conventions of what the tools do: kubectl, for instance, reads OpenAPI from the API server. The resources part also follows how controllers tend to work, where the controllers are also reading resources from the API server. So the gut check here is: is it kubectl-gettable? Are the inputs something you could get from kubectl get?
C
For instance, do they look and parse like that, or are they something you'd get from the GitHub Kubernetes examples? Are the Kubernetes examples functional inputs to kpt? The sources it can read from are files, standard in, API endpoints, git. So again, where it happens to read the resources from is less important; what's more important is that it's reading resources.
C
And then the third point, I think, is especially noteworthy, because it follows what the controllers do but is a shift from the way many traditional tools have operated: it needs to be able to read what it has written and perform level-driven updates, sort of like a reconciliation, rather than just regenerating everything from scratch.
C
So this is sort of the microservices architecture, where you have different modular components reading and writing data from some backend, or from the API server, and interacting by reading what each other have written in the past, rather than a model with a central piece that generates a bunch of stuff from scratch.
C
So what do we get from shifting the model left? It can run in CI/CD workflows. You can script configuration changes that can then be reviewed through PR checks and rolled back through git.
C
You can do different development workflows around IDE integration and validation. IDEs already have some validation based on the OpenAPI, but imagine what would normally be in a validating webhook: wouldn't it be great if your IDE could capture that same validation and apply it right in the IDE as you're developing, rather than having to actually try and push something to production to find out from the validating webhook
C
that it's not going to work? One advantage of the shift-left approach is that the scope of what it can validate is greater, because it can validate holistically all the resources that are going to be applied in the bundle, whereas validating webhooks only work on a single resource at a time. So trying to catch something in validation like "this Deployment plus this Service plus this ConfigMap must collectively look like this" is going to be harder to do with a validating webhook than it would be with shift left.
C
Finally, on operations, shift left provides a nice solution for doing cross-cutting transformations, injecting init containers, these sorts of things. This can also be done with mutating webhooks.
C
A lot of the work in kpt can be done on the server side; that is by design, that's the architecture. But there are reasons you may sometimes want to do it on the client side, or at least shift left into CI/CD, before you're trying to push something to production. You can visualize the changes, for instance in a diff, with shift left, and you can have them reviewed and rolled back with pretty clear workflows; whereas if you're doing it with a mutating webhook that operates on a pod to inject an init container, that creates a problem.
C
How do we do that low-level piece of not copying and pasting the Kubernetes example? You can git clone that thing, but there are some challenges: git is not designed or ordered around resources.
C
If
you
want
to
do
an
update,
for
instance,
it's
going
to
do
file
based
updates,
as
opposed
to
resource
based
updates
and
these
sorts
of
things
another
little
piece
it
does
is
static
modification
of
configuration.
So
how
can
you?
How
can
we
define
things
like
set
image
or
scale,
or
these
sorts
of
things
which
read
and
write
configuration
files,
but
basically
statically
like?
I
know
how
to
set
this
value
because
it
lives
at
this
particular
location
or
that
sort
of
thing
and.
C
And then there's dynamic modification of configuration. These are things where it's not simply a matter of "go find this field and set its value to this," or "go find these different fields and set their values based on these inputs." It becomes more like the controller-based approach: okay, look at the holistic system, figure out what the desired state is based on what these are specifying, and then figure out how to make it
C
look like you wanted it to, like the program thinks it should. And then finally there's an actuation piece, which is reading and writing from the API server. These are the extensions like apply and status, these sorts of things which we're pretty familiar with. All right, so now I'll give a quick
C
Let me give a quick demo of kpt. I'm in this kind of empty directory here. I'm going to use demo-magic to run the commands for me, so I can talk as it types the commands for me. The first thing it's going to do is fetch a package from git: this is saying go find, in a subdirectory of a little git repo I wrote for this example, the configuration to pull down, at this particular version.
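The fetch step looks roughly like this; the repo path, subdirectory, and version tag here are placeholders, not the exact ones used in the demo:

```shell
# Fetch a package: a subdirectory of a git repo, pinned to a ref.
# kpt records the upstream repo/ref locally so that later updates
# can merge in changes from upstream.
kpt pkg get https://github.com/example/repo.git/package-dir@v0.1.0 ./local-dir
```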
C
Then, typically in a workflow, you want to go ahead and add this to git right away, and that will make the diffs easier; I can use git diff to show you the changes as we modify this thing. kpt has some commands for just viewing package configuration; here's one of them that lets you quickly see, at a glance: what have I fetched?
C
But if we look at what was actually fetched over here, you can see that it's basically YAML configuration files.
C
And so, following the principle that it has to apply to a cluster, we can just kubectl apply this thing. kpt has its own apply utilities which allow for some nice functionality, but right now I'm going to use kubectl, just to demonstrate the philosophy again, and you can see that there is this etcd thing running.
C
And so now maybe you want to modify this thing locally. This is not something you necessarily have to do, but one may want to modify the package they've pulled down and add some stuff. This is demonstrating, again, that it has to be able to read what it has written and do level-based transformations, rather than regenerating from scratch.
C
So as we modify this thing, it should be able to keep those changes, because it's reading and updating. What we're going to do here is update this package from upstream to a new version, and this should keep the annotation I added while pulling in upstream updates, because it's doing a merge. You can see here that it's just changed this to a five from before; you can see that change here as well.
C
So that's what changed in the update. You can also see that it's kept that annotation, bar: again, reading what it has written and doing updates to it. Now, going in and editing these YAML files in vi is not always the way people want to edit stuff; there are different reasons you may want to just have quick commands to go and change these things.
C
It allows tools to do kind of the heavy lifting for humans, so that it's less likely humans are going to make mistakes. You can see what I've done here is just run the set command.
C
So these are setters: the static transformations I was talking about, where it's possible to define sort of static ways that the configuration may be manipulated. In this case we set the replicas, and then maybe we added some metadata about why we set it that way and who set it. So now, when you list these setters, you can see that replicas have been set from one to three, and you can see
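The setter workflow shown here might look roughly like the following; the setter name and the set-by/description values are illustrative, and the flags are from kpt's early `cfg` command set:

```shell
# List the setters defined in the package, with their current values.
kpt cfg list-setters .

# Change the replicas setter; kpt rewrites the referencing YAML fields
# in place, and can record who made the change and why.
kpt cfg set . replicas 3 --set-by "phil" --description "scale up for demo"
```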
C
And so you can see here what that's done: it's gone and just updated the YAML file. Again, going back to that philosophy: it reads YAML, it writes YAML, it reads what was written previously. And so this is loosely coupled with the packaging piece: if the packaging piece reads and writes YAMLs, and the setters piece reads and writes YAMLs, they don't really need to know about each other or how each other operate, allowing better interoperability. And so in this case it has just gone ahead and updated
C
that. And I'll talk a little bit about this OpenAPI piece, which is actually how these setters are performed. The inputs are OpenAPI, the outputs are OpenAPI, and so we're using OpenAPI to describe metadata about the configuration, just like Kubernetes uses OpenAPI from the API server to describe
C
the configuration. So we're going to go ahead and apply this thing with our changes, and we're going to see that actually it's not working; there are these errors. That's because of the way I've designed this: while I changed the replicas, just changing the replicas isn't enough. In this case, a pure static transformation, which would work great for certain things like image and replicas on a Deployment, isn't enough for this particular StatefulSet.
C
It's saying you can't just change the replicas, because the other replicas now are not configured correctly with the right initial cluster configuration. We're going to fix this by adding dynamic configuration, so I'm going to go ahead and pull in an upstream update, and this will
C
add a couple of things: it's going to add this function declaration. There are different function runtimes, and this one happens to be written in Starlark, which is kind of an experimental runtime that hasn't really been supported yet, but I think demonstrates very well the architecture we're trying to show here, because
C
it allows you to quickly perform a dynamic, sort of scriptable transformation without executing arbitrary code on your machine. There are other runtimes that allow this too, and containers are actually the primary runtime we're supporting, but this one happened to be a little bit easier for the purposes of demonstration. So it's updated this configuration to say: hey, this thing now has this function, which is a dynamic piece of code that should perform transformations, and it's done through this Starlark script here.
C
Again, the Starlark piece is experimental and really great for demonstration, but containers are the more mature of the two. So this time, when we set the replicas to two, we're going to see something different: the replicas have been changed, but this value of the initial cluster has also been changed, you see. So that initial cluster, instead of just being statically defined as the first pod, now includes all of the different pods that we expect, based on the replicas.
C
I deleted that pod; for some reason it was having trouble restarting, but now you can see that it's running successfully here, now that we changed that. So I'll show you quickly what the reconcile function looks like. It's this small snippet of code: it takes in a list of items, and this schema, the input and output, is well defined for what functions look like. So it takes in the items this way, and then
C
it pulls out those different items, finds the matching StatefulSet based on the name of what was annotated with the function, and then goes and pulls out
C
the replicas, the service name, and the port, and then figures out what the list of pods is that are going to be part of the initial cluster based on that. Then it goes and sets, on that environment variable, that list of pods as the initial state. So it's gone and modified that particular resource in the items, and then kpt itself will write that back out. And so this is automatically triggered.
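The core of a function like that can be sketched in ordinary Python (the demo's Starlark code is python-like). The function names, the env var name, and the peer-URL format below are illustrative assumptions, modeled on etcd's initial-cluster convention, not the demo's exact code:

```python
def build_initial_cluster(name, service, replicas, port):
    """Compute an etcd-style initial-cluster string for a StatefulSet:
    one "<pod>=<peer-url>" entry per expected replica, using the
    stable per-pod DNS names that the headless service provides."""
    entries = []
    for i in range(replicas):
        pod = "%s-%d" % (name, i)
        entries.append("%s=http://%s.%s:%d" % (pod, pod, service, port))
    return ",".join(entries)

def reconcile(items):
    """Find the StatefulSet among the input items and rewrite its
    initial-cluster env var to match spec.replicas."""
    for item in items:
        if item.get("kind") != "StatefulSet":
            continue
        spec = item["spec"]
        value = build_initial_cluster(
            item["metadata"]["name"],
            spec["serviceName"],
            spec["replicas"],
            2380,  # etcd peer port; illustrative
        )
        container = spec["template"]["spec"]["containers"][0]
        for env in container.setdefault("env", []):
            if env["name"] == "INITIAL_CLUSTER":
                env["value"] = value
    return items
```

The point of the sketch is the shape of the contract: the function receives all the items, computes the desired state holistically from the replica count, and edits the resources in place; the orchestrator (kpt) handles reading and writing the YAML.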
C
And then I can show you the OpenAPI for the setters; this is kind of how the setters are defined.
C
It's done through these OpenAPI definitions, which describe: here's the name of a setter called replicas, here's who it was set by, and here's its current value. And going back to that etcd YAML file, this is just an OpenAPI reference, and these resolve using standard OpenAPI libraries; there's nothing kpt-specific about this reference or its resolution.
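In kpt's early setter format, that looked roughly like the following; the setter name and values here are illustrative, and the exact schema is the one documented for the package's Kptfile:

```yaml
# In the Kptfile: an OpenAPI definition describing the setter.
openAPI:
  definitions:
    io.k8s.cli.setters.replicas:
      x-k8s-cli:
        setter:
          name: replicas
          value: "3"
          setBy: phil
```

In the resource YAML, the field then carries a plain OpenAPI `$ref` comment pointing at that definition:

```yaml
spec:
  replicas: 3 # {"$ref": "#/definitions/io.k8s.cli.setters.replicas"}
```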
C
There's an OpenAPI definition that comes as a comment on this particular field, and then it also parses additional OpenAPI definitions from the Kptfile. Using those two things together, you can augment the OpenAPI provided by Kubernetes with your own extensions, and you can customize an individual object's OpenAPI to be something more than just the generic OpenAPI for that type: you could say, for this particular object, this is how these particular fields look. Which opens the door, in the future, for
C
doing things such as saying this particular image, maybe, should match this regular expression, and putting restrictions on certain fields for particular objects, while still allowing end users to make modifications and customizations to them.
E
I can just run through it. It's just a smaller part of kpt than what Phil went through, so it shouldn't take too long.
E
I think I'll have to do it like this, because I'm going to demo something which might not work too well if I change the font size, since it's still early. So I'm going to show the kpt live functionality. This is for applying kpt packages to a cluster, but kpt live really just needs resources, so just like Phil demoed kpt with kubectl, I'll just demo kpt live with regular resources.
E
I just have a set of five different resources here, and I want to apply those to a cluster. kpt live adds a couple of things on top of what you get from kubectl: it has pruning, which is slightly different from how pruning is done in kubectl.
E
The first thing we need to do, because of the way pruning works, is create inventory objects as part of apply, and to do that we need to know where we want to put those inventory objects and also generate an ID. So the first thing is just kpt live init, and that will create an inventory template, which is just a ConfigMap; I can show it here.
E
The important things it contains are the namespace and the annotation at the bottom.
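That inventory template is roughly a ConfigMap like this. The name and ID here are placeholders, and the annotation key is my best guess at the one used by the cli-utils library that kpt live builds on; the exact key may differ by kpt version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory-00000000          # placeholder name
  namespace: default                # where inventory records are stored
  annotations:
    # Generated grouping ID; apply/prune use it to find past inventory.
    cli-utils.sigs.k8s.io/inventory-id: "00000000-0000-0000-0000-000000000000"
```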
E
Now what I can do is a preview of what will happen if I apply this. I can use kpt live preview, and it shows that it will create a PodDisruptionBudget, a CronJob, and so on, and you can see the inventory object at the top. And then I can apply this, and I wanted it to wait for reconcile.
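That sequence, sketched as commands; these flags are from early versions of kpt live and may have changed since, and this assumes kpt is installed and a cluster is configured:

```shell
kpt live init ./pkg      # generate the inventory template ConfigMap
kpt live preview ./pkg   # dry-run: show what apply would create/prune
kpt live apply ./pkg --wait-for-reconcile --wait-timeout=2m
```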
E
A
E
So there, everything is rolled out; now everything is not only applied, but we also know that it reconciled. You can also get another view of this, and, since this is new, it doesn't handle soft wraps properly; that's why I needed a larger screen. This shows all the resources, the status that's generated, the status conditions, and some information about all of them.
E
What we can do now is update something. Let's say we want to change the StatefulSet here, to increase the number of replicas to four.
E
This view is partly inspired by the kubectl tree command. Okay, so this was a timeout: I forgot to set the timeout for longer than 60 seconds, but the reconcile was still happening in the background. What we can do now is just delete the StatefulSet, and then we can do a preview again.
E
And then we can see here that when we run apply next time, we expect the StatefulSet to be pruned, the Service too, and also the existing inventory object, as it will create a new one. And if we do apply again, we'll see that the StatefulSet is no longer there. In the background we're working on functionality to wait while pruning happens too.
E
E
B
C
Sorry, can you say that one more time? The audio went out halfway through what you were saying.
B
The functions that you use to do the updates, like the little callbacks that are called on set, for instance: those are specified via OpenAPI, correct?
C
No. So there are a couple of ways of running the functions, and there are layers in the functions. The layer you're talking about is effectively how they are orchestrated: what causes them to be invoked. And there are a number of different ways of specifying how they're orchestrated.
C
The one I demonstrated is through an annotation on a resource that effectively describes how to invoke the function, how to set up the runtime. For containers, that would be things like: should you enable the network? By default the network would be disabled, so that it's relatively safe to run and it won't send your configs to some malicious endpoint or something like that.
C
But you can say, "oh, we need the network to be enabled," and that's possible to do. Those can be set on any resource, and it will cause the function to be run with the resource that contains the annotation provided in a special field in the input called functionConfig. So you can design a system to look kind of like a controller, where you could create, for instance, a new client-side CRD,
C
if you will; this is the way a lot of people have described it. You have some abstraction type, and then you could put the annotation on there, and that would cause kpt to go invoke the function, providing that abstraction type as input, just like a controller/CRD model would work, and then the rest of the resources that are scoped to it.
C
Those, being in the same directory or subdirectories, would be provided as items, and then the function would look at that functionConfig, look at the resources, and make sure that the whole system looked correct.
C
What I demonstrated is a little bit of a different model than that: instead of doing an abstraction, like a new type, I demonstrated a function on the actual resource itself, where the function's job is to validate that individual resource. But it's also possible to run functions imperatively on the command line; you can give it a --image flag.
C
I also demonstrated an implicit run, where the function is run after setting, just to normalize everything. There's a more explicit run, where you can say kpt fn run on a directory, and it will traverse all the configurations and run all the functions. You can also do an imperative mode, where you say kpt fn run and then --image, and it will run the function from that image explicitly, passing all the configuration into it.
B
Would I have one function that gets invoked for multiple resources, in order to update those resources in sequence, or would I use something like a client-side CRD and then treat those resources as an artifact that's generated?
B
Say I was dealing with a typical stateless serving workload: I have a Deployment, I have a Service, I have a PodDisruptionBudget, I've got a HorizontalPodAutoscaler, and so on. And I want to make sure, for instance, that naming is consistent across all the resources, so when I update my app name, I want to update all of that consistently; or I want to modify labeling across all of them and ensure that the selectors conform to select the right artifacts.
C
B
C
Offhand, I'd start with the simplest approach that introduces the least amount of change to your existing system, and only shift into divergent stuff when the benefit is clear. So I'd start with what's closest to the system you're running today. If you have some CI/CD system already set up, I'd maybe create a GitHub check or something like that, or whatever your CI/CD system does as part of a PR, and I'd run it, potentially, imperatively.
C
If you give it command-line arguments, it will automatically generate a functionConfig based on those arguments and pass that in. So from that starting point, you get a cross-cutting view of "okay, validate this whole package," integrated into your CI/CD system. And then from there,
C
that's where it starts to look more like an abstraction type. Instead of it just running in the CI/CD system, embedded into that script, you want it to be run as part of your make process whenever someone runs locally. That's the point where maybe you want to start looking into a declarative approach, or if you want scoping of different functions to different subdirectories: say you have a package of 20 resources with different subdirectories, and you want this function run against one subdirectory and that function run against a different subdirectory, and then you want to run one command
C
that does all of them. These are, as I said, more sophisticated techniques, and when you need them, it makes sense to do them declaratively.
C
But if you just want to say, "here's a collection of resources; please error out, instead of allowing a PR to be merged, if they don't look good," just running them imperatively, I don't think there's anything wrong with that. It's certainly okay.
C
But it probably depends on the rest of your CI/CD process too. Going with the principle that it's maybe best if it fits into the processes you already have: if you have a CI/CD system that runs a bunch of checks, adding that check imperatively there, where everyone's going to understand it, makes sense. If you don't have that system in place, then maybe you don't want to set it up, and what I'd do in that case is:
C
I'd create a new configuration file for your abstract type, maybe, and put it at kind of the root, or above the root.
C
B
B
How do I do this if I want to use a package, and the package needs to be deployed across multiple environments? Like, my primary need for customization is to specialize a package to deploy it in different environments.
C
I'd say this model still works. Going back: if you have a solution now, say you're using Kustomize today to do it, and you have a different kustomization for each environment, then it would work pretty much as a value-add on top of what you're doing today. So you would write your package as, you know, a collection that would contain the different kustomization files.
C
So your package maybe now includes all the environments. I'll just go through two scenarios. Say you're at the deployment piece: maybe that's your kpt package, and you run kustomize build, or kubectl apply -k, or whatever it is, against each of those environments.
C
What might you do on top of that? Well, maybe right before you apply to an environment, you need to set in the manifest the image that you just built, and so before:
C
if you had a process where you set the sha of the image into the image tag, now you'd maybe create a setter for that. And so now, programmatically from your deployment pipeline, you can run set image and then run apply after it. That would be one way of integrating these capabilities to solve a particular need. Another one would be:
C
there's a way to write up the function so that you can do kustomize build and then pipe it into a function, for instance. So maybe right before doing kubectl apply, you can kustomize build the environment, pipe it into a function, make sure that it exits successfully, and then apply after.
B
I was trying to understand how it fits in with overlay-based customization for multiple environments, and it sounds like primarily it's a packaging system that's a value-add on top of that: it's compatible with overlay customization but doesn't mandate that particular approach.
C
Absolutely. So maybe I'll say two things about how this fits in with Kustomize. The first is: even if you're using Kustomize with bases and such, there's still a bootstrapping problem of, well, how do I get that initial kustomization.yaml that says which bases I'm using, or what the different, you know,
C
common annotations and common labels are? You probably end up copying and pasting that from some location to get your initial state, and now that's kind of orphaned from wherever it was copied and pasted from. So if you want to update the version of Kustomize, for instance, and that means some new feature was enabled and you want to add that to the kustomization file,
C
like in the example, how do you get the latest version of the example? So making the kustomization.yaml itself a package, then kpt-getting that, and then just doing your normal workflow solves a couple of those problems for you. The second is blueprint customization: there's the environment customization, which I think is what we were just talking about, and then there's maybe a blueprint customization, which is, okay,
C
I want, say, three Java backends, and you can use Kustomize to do that, using different bases and creating different graphs. Some people really enjoy that model; they like it and they totally get it, in which case keep going that route; it's not wrong to do it that way. But the one trade-off there is you end up with kind of a large graph, and I've seen some very deep Kustomize graphs, and then folks are kind of limited
C
in what can be done within the Kustomize framework. The Kustomize framework is very flexible, so it's not like there are a lot of limitations there, but you are working within this graph now. So maybe if you're saying, for whatever reason,
C
you don't like that approach, and you just want, "hey, I want three copies of the backend, and I want to manually customize those things instead of using patches, and I just want to pull them down from my example." What you would otherwise do is copy and paste the Java backend YAML three different times, and now those are your Kustomize bases; that would be kind of your blueprint approach.
C
Well, kpt now allows you to pull down those different Java backends into different local packages. So instead of having a shared Java base that you try to create this diamond-shaped Kustomize graph with, you now have a different sort of graph, where each of those is a completely independent node and there's no shared base between them through the Kustomize graph.
C
But when you want to pull down updates to them, you use kpt to do that resource-based merge model of pulling in updates from wherever they originally came from. And I guess I'd say, the way I've described Kustomize and kpt to folks, because some folks have asked, "well, should I use Kustomize or kpt for this?":
C
there seems to be overlap in what they're capable of doing, or the problems they're capable of solving, and the metaphor I have held in my mind is kind of like how, in a programming language, you have both while loops and functions. Technically you can write a program without functions and just use ifs and while loops, and technically you can probably write programs without while loops if you use some crazy recursion with functions. And so, while both functions and while loops have overlap,
C
the answer is, you know, it depends on the problem and what your programming style is and what happens to make sense there. Yes, there's overlap between them; yes, it makes sense to have both of them, because they provide slightly different ways of solving kind of the same problems, and they can be used together.
B
So for the stateless serving workload example, I imagine you might have a blueprint called "service" or "microservice," which contains all of the resources that you would generally have to start with. You could use that blueprint to generate YAML that instantiates a specific service, give it an application name and all that stuff, and then you could use Kustomize for last-mile customization as you deploy that YAML across different environments.
C
Pretty much, yep; that's the model we're considering, and then you can even throw in functions. We're going to have to get more opinionated here: I think what we're going to have to do is build out more opinionated guidance, and there are probably going to be three or four different patterns of "if this is how your organization does something, here's how you do it." But you can even use functions on top of that. So you could use a function with an abstraction that generates the config, and that's your package.
C
Or your package could just be the raw YAML for the stateful or stateless service, and then that's your packaging: you pull that down and modify it in place, rather than having an abstraction. In either one of those cases, you can use either the output of the function or the raw base as Kustomize bases, and then do variant customization from there.