From YouTube: Kubernetes SIG Apps 20200504
Description
Led by: Janet Kuo
Co-host(s): Adnan
Announcements:
- Working with KK from SIG Storage on StatefulSet Volume resizing
Demo:
- kpt [pwittrock, mortent] - https://googlecontainertools.github.io/kpt/
A: Want to make an announcement real quick? B: Oh, sure. A: Is that a follow-up from [inaudible]? B: Good. So, with the StatefulSet volume resizing, I talked to SIG Storage. There's an implementer from SIG Storage that wants to move forward with modifications to the existing KEP by KK, so I'm going to work with them and try to get that implemented, and Anand expressed some interest in trying to contribute to that as well. So I think we have enough.
C: For the demo, it's helpful to give the context of what we're trying to do and why it was designed the way it was. One of the themes is about shifting the resource model left, the Kubernetes resource model. As we've talked about, it often lives in the API server, in the control plane, and is built using controllers, mutating webhooks, validating webhooks, and resources. These are kind of the primitives that allow systems to be built on top of the resource model, where the resource model is a set of declarative APIs
that have different programs which read those APIs and write to them, allowing things to be loosely coupled by composing a number of different APIs together. What kpt is doing is taking the same resource model and shifting it left, and so that means that operating on Kubernetes resources can be done before they're applied to the cluster, before you even talk to a cluster, for instance. Some of the reasons this shift left is useful, more abstractly:
Catching a problem early would be better than waiting until it's actually been pushed and then catching it there. So the more we can shift the work left, the more opportunity we have to catch errors before they go out. And one thing I'd really like folks watching this to take note of is the architecture.
The first is that the output of kpt should be Kubernetes resources, and maybe OpenAPI. This follows the Kubernetes resource model, where controllers, for instance, are writing to resources in the API server. This basically means you need to be able to kubectl apply whatever kpt outputs, or it should follow roughly what the Kubernetes examples on GitHub show; it should look like those things. And you can write these things to different locations: it can write resources to files, to standard out, to endpoints.
These are all just locations. What's important is that it's writing resources that the Kubernetes API server could conceptually understand. Maybe one is a CRD that isn't installed, for one, so that doesn't mean they need to be the native APIs, but they look and feel like Kubernetes resources. And I think most tools kind of do this eventually at the end, since this is the native speech of the Kubernetes API server.
Eventually, if you're going to create a resource, the tool is going to have to output something that looks like a resource. The next one, which not all tools do, is what the input looks like: the primary input must also be Kubernetes resources, so that the input and the output are symmetric. It may read OpenAPI in addition to Kubernetes resources. This follows the conventions of what the existing tools do.
kubectl reads the OpenAPI from the API server along with the resources, and it also follows how controllers tend to work, where controllers are also reading resources from the API server. So the gut check here is: is it kubectl-gettable? Are the inputs something you could get from kubectl get, for instance? Do they look and parse like that?
C: You can do different development workflows around things like IDE integration and validation. IDEs already have some validation based on the OpenAPI, but imagine what would normally be in a validating webhook. Wouldn't it be great if your IDE could capture that same validation and apply it right in the IDE as you're developing, rather than having to actually try to push something to production to catch that validation
C
What's
not
gonna
work
and
one
advantage
of
the
ship
left
approach
is
that
the
scope
of
what
it
can
validate
is
is
greater
because
it
can
validate
holistically
all
the
resources
that
are
gonna
be
applied.
The
bundle
where,
as
validating
Web
books,
only
work
on
a
single
resource
at
a
time
so
trying
to
catch
something,
a
validation
like
this
deployment,
plus
the
service
buses
config
map.
Much
look
like
this
collectively
is
gonna
be
harder
to
do
with
the
validating
web
hook
than
it
would
be
in
ship
left
and
then.
C: Then, how do we do that low-level piece of not copying and pasting, say, a Kubernetes example? You can git clone that thing, but there are some challenges: git is not designed around resources. If you want to do an update, for instance, it's going to do file-based updates, as opposed to resource-based updates, and these sorts of things. Another low-level piece is static modification of configuration, and then how can you do
dynamic modification of configuration. These are things where it's not simply a matter of "go find this field and set its value to this"; it's "go find these different fields and set their values based on these inputs." It becomes more like the controller-based approach: okay, look at the holistic system, figure out what the desired state is based on what these are specifying, and then figure out how to make it
look the way the program thinks it should. Then finally there's an actuation piece, which is reading and writing from the API server. These are pieces like apply and status and these sorts of things, which we're pretty familiar with. All right, so now I'll give a quick demo.
C: Let me give a quick tour of kpt. I'm in this kind of empty directory here, and I'm going to use my magic to run the commands for me, so I can talk as it types the commands for me. So the first thing it's going to do is fetch a package from git. This is saying: go find, in a subdirectory of a little git repo I wrote for this example, the configuration to pull down, at this particular version.
Typically, in a workflow, we want to go ahead and add this to git right away, and that will make the diffs easier; I can use git diff to show you the changes as we modify this thing. kpt also has some commands for just viewing package configuration. Here happens to be one of them, which lets you quickly see, at a glance, what I fetched.
C: Following that, we can apply this to a cluster. We can just kubectl apply this thing; kpt has its own apply utilities which allow for some nice functionality, but right now I'm going to use kubectl just to demonstrate the philosophy again. And you can see that there is this etcd thing running.
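The fetch-and-apply flow described above looks roughly like the following. This is a sketch assuming the 2020-era kpt CLI syntax; the repo URL, package path, and version tag are stand-ins for the demo repo, not the actual one used in the meeting:

```shell
# Fetch a subdirectory of a git repo at a tagged version into ./my-pkg
kpt pkg get https://github.com/example/repo.git/package-dir@v0.1.0 my-pkg

# Commit right away so later modifications show up cleanly in git diff
git add my-pkg && git commit -m "Fetch my-pkg at v0.1.0"

# View the package configuration at a glance
kpt cfg tree my-pkg

# Plain kubectl can apply the output, since kpt emits ordinary resources
kubectl apply -R -f my-pkg
```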
C
So
as
we
modify
this
thing,
it
should
be
able
to
keep
those
things,
because
it's
reading
and
updating
so
we're
gonna
do
here
is
we're
gonna,
go
ahead
and
update
this
package
from
upstream
to
a
new
version,
and
this
should
go
ahead
and
keep
the
annotation
I
added
while
pulling
you
in
upstream
updates,
because
it's
doing
a
merge
right,
and
so
you
can
see-
and
here
that
it's
just
changed
this
to
a
five
from
a
four.
You
can
see
that
change
here
as
well.
So
that's
what
it's
changing
the
update!
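The update step is a single command; this is a sketch with an illustrative version tag, and resource-merge was the default merge strategy in kpt at the time:

```shell
# Pull upstream changes at a new version, merging them with local edits
# (such as the annotation added earlier) rather than overwriting files
kpt pkg update my-pkg@v0.2.0 --strategy=resource-merge

# Inspect what the merge changed
git diff my-pkg
```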
C
Going
head
in
behine
and
the
video
files
is,
it's
not
always
the
best
way
of
not
always
the
way
people
want
to
edit
stuff
people
do
or
there's
different
reasons
you
may
want
to
just
have
put
commands
to
go
and
change
these
things.
It
allows
tools
to
do
kind
of
the
heavy
lifting
for
humans
so
that
it's
less
likely
humans
are
going
to
make
mistakes,
and
you
can
see
what
I've
done
here
is
just
add.
It
run
the
set
command.
So
these.
C: And so you can see here what that's done: it's gone and just updated the YAML files. So, again, going back to that philosophy of "read YAML, write YAML": it reads what was written previously, and so this is loosely coupled with the packaging piece. The packaging piece reads and writes YAML; the setters piece reads and writes YAML. They don't really need to know about each other or how each other operates, allowing better interoperability. And so in this case it has just gone ahead and updated that, and I'll talk a little bit more about this.
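The setter commands behave roughly like this; the setter name replicas matches the demo, and the package directory is illustrative (2020-era kpt CLI syntax assumed):

```shell
# List the setters the package exposes, with their current values
kpt cfg list-setters my-pkg

# Change a value through the setter instead of editing the YAML by hand
kpt cfg set my-pkg replicas 5
```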
C: Back to the configuration: we're going to go ahead and apply these changes, and we're going to see that actually it's not working; there are these errors. And that's because we designed this so that while it changed the replicas, just changing the replicas isn't enough. In this case a pure static transformation would work great for certain things, like image and replicas, maybe for a Deployment, but in this particular case, for the StatefulSet, it's saying [inaudible].
C: This allows you to quickly perform a dynamic, scriptable sort of transformation without executing arbitrary code on your machine. There are other runtimes that allow this too; containers are actually the primary runtime we're supporting, but this one happened to be a little bit easier for the purposes of demonstration. And so it's updated this configuration to say: hey, this thing now has this function, a dynamic piece of code which should perform transformations, and it's done through this Starlark script.
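Declaratively, a function is attached to a resource through an annotation; this fragment is a sketch, and the script path and function name are hypothetical (the annotation key follows the config.kubernetes.io convention kpt used at the time):

```yaml
# Resource fragment: the annotation tells kpt which Starlark script to
# run with this resource as input; path and name here are made up
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
  annotations:
    config.kubernetes.io/function: |
      starlark:
        path: reconcile.star
        name: example-function
```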
C: That list of pods is the initial state, and so it's gone and modified that particular resource in the items, and then kpt itself will write that back out. And this is automatically triggered any time set is run; by virtue of setting this value, it runs this function, and that function's job is just to make sure that all the configuration looks correct based on its current set of values.
C: And then I can show you the OpenAPI for the setters. This is how the setters are defined: these all map to OpenAPI definitions which describe, here's the name of a setter called replicas, here's who it was set by, and here's its current value. And going back to that YAML file, this is just an OpenAPI reference, and these resolve using standard OpenAPI libraries; there's nothing kpt-specific about this reference or its resolution.
C: The only things that are special are that kpt looks for OpenAPI in comments, so as it parses the YAML files, it checks: is there an OpenAPI definition that comes as a comment on this particular field? And then it also parses additional OpenAPI definitions from the Kptfile. And so using those two things together, you can augment the OpenAPI provided by Kubernetes with your own extensions, and then you can customize
C
The
individual
object
is
open,
API
to
be
something
more
than
just
like
the
generic
open
API
for
that
type
that
you
could
say
for
this
particular
object,
even
that
these
are
all
these
particular
fields,
look
which
opens
the
door
for
and
the
future
doing
things
such
as
saying
this
particular.
Maybe
image
should
have
this
regular
expression
and
putting
restrictions
on
certain
fields
for
particular
objects,
while
still
allowing
end-users
to
do
modifications
to
them
in
customizations.
E: I can just run through it; it's just a smaller part of kpt than what Phil went through, so it shouldn't take too long. Great.
E: I think I'll have to do it like this, because I'm going to demo something which might not work too well if I change the font size; it's still only so-so. What it's going to show is the kpt live functionality, which is for applying kpt packages to a cluster.
E: The first thing we need to do, because of the way pruning works, is create inventory objects as part of apply, and to do that you need to know where to put those inventory objects and also generate an ID. So the first thing you do is just run kpt live init, which will create an inventory template, which is just a ConfigMap; I can show it here.
E: So this view is partly inspired by the kubectl tree command. Okay, so this was a timeout; I forgot to set the timer for longer than 60 seconds, but the reconciliation was still happening in the background. What we can do now is delete the StatefulSet and then do a preview again.
E: And then we can see here that when we run apply next time, we expect the StatefulSet to be pruned, the Service, and also the existing inventory object, as it will create a new one. And if you do apply again, you'll see that the StatefulSet is no longer there, and it will be deleted in the background; we're working on functionality to wait while pruning happens. And finally, it's also possible to do a preview of the apply.
C: So there are a couple of ways of running the functions, and there are layers to the functions. The layer you're talking about is effectively how they're orchestrated: what causes them to be invoked. And there are a number of different ways of specifying them to be orchestrated.
C: Then it will cause the function to be run, with the resource that contains the annotation provided in a special field in the input called functionConfig. So you can design a system to look kind of like a controller, where you could create, for instance, a new client-side CRD.
C: What I demonstrated is a little bit different model than that. Instead of doing an abstraction, a new type, I demonstrated a function on the actual resource itself, where the function's job is to validate that individual resource. But it's also possible to run functions imperatively on the command line: you can give it a --image flag.
C
Also
I
also
just
demonstrated
an
implicit
run
where
the
function
is
run
after
setting
just
just
a
normalize
everything
there's
more
of
an
explicit
run
where
you
can
say
kept
function
or
kept
up
and
run
on
a
directory,
and
then
it
will
reverse
all
the
configurations
and
then
run
all
the
functions.
You
can
also
do
an
imperative
mode
where
you
can
say
kept
append,
run
and
then
stay
image,
and
then
it
will
run
the
function
from
that
image,
explicitly
passing
in
all
the
configuration
into
it.
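The two explicit invocation modes just described look roughly like this; the function image name is hypothetical:

```shell
# Declarative: discover function annotations in the directory
# and run each declared function over the package
kpt fn run my-pkg

# Imperative: run one specific function image over the package,
# regardless of annotations
kpt fn run my-pkg --image gcr.io/example/my-fn:v0.1
```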
B: Let's say I have one parameter that would cause effects across multiple resources. Would I have one function that gets invoked from multiple resources in order to update those resources in sequence, or would what I have look like a client-side CRD, where I treat those resources as an artifact that's generated?
B: Let's say I was dealing with a typical stateless serving workload: I had a Deployment, I have a Service, I have a PodDisruptionBudget, I've got a HorizontalPodAutoscaler. I want to make sure that naming is consistent across all the resources, for instance, so if I update my app name, I want to update all of that consistently, or I want to modify labeling across all of them and ensure that the selectors conform to select the right artifacts.
C: Off the top of my head: start with the simplest approach, the one that introduces the least amount of change to your existing system, and then maybe only shift into more divergent stuff when the benefit is clear. I'd start out with: okay, what's closest to the system you're running today? Well, if you have some CI/CD system already set up, I'd maybe create a GitHub check or something, whatever your CI/CD system does as part of a PR, and run it there, potentially imperatively.
C: But that probably depends on the rest of your CI/CD process too. Going with the principle that it's maybe best if it fits into the processes you already have: if you have a CI/CD system that runs a bunch of checks, adding that check imperatively there, where everyone's going to understand it, makes sense. If you don't have that system in place, then maybe you don't want to set one up, and you just do it yourself.
C: What I'd do in that case is create a new configuration file for your abstract type, maybe, and then at kind of the root, or above the root, add the annotation. The annotation would describe the container or Starlark script or whatever runtime you happen to be using. Then you run kpt fn run on that directory, and it would invoke that thing. If that fits better into your workflow, that makes sense too.
C: I'd say this model still applies. Going back to: if you have a solution now, say you're using Kustomize today and you have a different kustomization for each environment, then it would be a value-add on top of what you're doing today. You would write your package as a collection that contains the different kustomization files, so your package maybe now includes all the environments.
C: Absolutely. Maybe I'll say two things about how this fits in with Kustomize. The first is that even if you're using Kustomize with bases and such, there's still a bootstrapping problem of, well, how do I get that initial kustomization.yaml that says which bases I'm using?
Or, in the example, how do you get the latest version of the example? Making the kustomization.yaml itself a package, then doing kpt get on it and then just doing your normal workflow, solves a couple of those problems for you. The second is blueprint customization: there's the environment customization, which I think we were just talking about, and then there's
maybe a blueprint customization, which is: okay, I want, say, three Java backends. You can use Kustomize to do that, using different bases and creating different graphs, and some people really enjoy that model; they like it and they totally get it, in which case keep going that route. It's not wrong to do it that way, but the one trade-off is that you end up with kind of a large graph, and I've seen some very deep Kustomize graphs, and then folks are kind of limited.
C: If you don't like that approach, and you just want, hey, I want three copies of the backend, and I want to manually customize those things instead of using patches, and just pull them down from an example: what you would otherwise do is copy and paste the Java backend YAML three different times, and now those are your Kustomize bases. That would be kind of your blueprint approach.
C
Well
kept
now
allows
you
to
pull
down
those
different
Java
backends
into
different
local
packages,
and
so,
instead
of
having
a
shared
base
between
a
shared
Java
base
that
you
try
and
create
this
customized
diamond-shaped
graph
with
you,
you
no
longer.
You
have
a
different
sort
of
graph
where
each
of
those
is
a
completely
independent
node
and
there's
no
shared
base
between
them
through
the
customized
graph.
C
But
when
you
want
to
pull
down
updates
to
them
and
use
kept
to
kind
of
do
that
resource
based
merged
model
of
pulling
pulling
updates
from
wherever
they
originally
came
from
in
I,
guess,
they'd,
say
the
way
I've
described
customized
and
kept
to
folks
it
cuz.
Some
folks
have
asked
like
well,
should
I
use
customized
or
kept
for
this.
C
There
seems
to
be
overlap
and
and
what
they're
capable
of
doing
or
the
problems
are
capable
of
solving
and
the
metaphor,
I
upheld
in
my
mind,
is
kind
of
like,
but
in
a
programming
language
you
have
both
while
loops
and
functions.
Right
and
technically
you
can
write
a
program
without
functions
and
just
use
like
ifs
and
wild
loops.
C
B: For a stateless server, a good example: I imagine you might have a blueprint called service, or microservice, which contains all the resources you would generally have to start with. You could use that blueprint to generate YAML that instantiates a specific service, give it the application name and all that stuff, and then you could use Kustomize for last-mile customization as you deploy that YAML across different environments.
C: Pretty much, yeah. That's the model we're considering, and then you can even throw in functions. We're going to have to get more opinionated here; I think what we're going to have to do is build very opinionated workflows, and there are probably going to be three or four different patterns of: if this is how your organization does something, here's how you do it. But you can even use functions on top of that, so you could use a function with an abstraction that generates the config, and that's your package.
C
Your
package
could
just
be
the
raw
mo
for
the
stateful
or
stateless
service,
and
then
that's
your
packaging,
pull
that
down
and
modified
in
place
rather
than
have
an
abstraction
and
either
one
of
those
cases.
You
can
use
the
eval
to
the
function
or
the
raw
base
as
customized
basis
and
then
do
do
variant
customization
from
there.