From YouTube: Kubernetes SIG CLI 20220126 - KRM Functions Subproject
Description
Agenda: https://docs.google.com/document/d/1x80l4i88F27zSCxSjlhvwFdH6jQAHou1k1ibuXrDTaw/edit (must be on SIG CLI mailing list to access)
A: Hello and welcome to the second KRM Functions subproject meeting. We have a few items on the agenda today, so let's just dive in. I believe we have all met each other, so we don't need to do any introductions today. Let's go with the first topic, which is the function improvement PR. Munchie, do you want to take it from here?
C: Yeah, so this is a follow-up discussion. We discussed this in a previous meeting, and I want to understand what I should resolve before we can merge this. That's why I want to bring it up again here. Jeremy said he was going to provide some feedback, but I'm still waiting for it.
C: Yeah, so this one is about the fact that if we change ResourceList.Items from RNode to a list of KubeObjects, it's a breaking change for existing users.
C: For this one, I think if we choose to change the type of ResourceList.Items, we would do that in the next minor release, which will contain all the breaking changes. kyaml is still pre-v1, and existing users can choose to upgrade to that minor version, or stay with their current version or a patch release of their existing minor version. We're not forcing them to upgrade, since, you know, we are following semver here.
A
Yeah
and
we
can
make
breaking
changes.
This
would
be
a
pretty
huge
one
so
like
just
because
we
can
doesn't
mean
we
shouldn't
be
careful
about
it,
but
I
I
agree
with
you
that,
strictly
speaking,
it's
perfectly
fine
to
to
go
ahead
and
make
changes
that
we
think
are
warranted
in
an
alpha
library,
but
with
this
specific
change,
it's
actually
related
to
the
exact
same
thing
that
my
comment
down
here
is
really
about.
A: So for the combination of those two reasons, I would say we should stay with RNode as the function config and the items. I don't know, do you want me to explain myself more beyond what I said here, or did you have a question? You said you were okay with merging it, so it sounded like...
C: Yeah, I said that because I want to get this improvement in, instead of it being blocked here while the user gets nothing.
A: Okay. Did you understand my rationale? There's just so much about this package that is oriented around RNode transformation, like the filters, the selectors, the matchers. A huge amount of the high-level, user-friendly convenience tools would stop working, and we'd have to invent duplicates of them for the duplicate object type if we were to have a second one.
C: For the selectors and the matchers, why would they suddenly stop working?
A: Because they're oriented around RNodes. The selector itself expects to be a filter, which is an RNode function: it takes a list of RNodes in and gives a list of RNodes out. And the matchers, instead of taking a collection, take an instance, see whether it matches or not, and return a boolean, but they are RNode-oriented. So we have these really high-level convenience functions, like framework.MatchAll.
A
All
is
one
of
them
that
you
just
give
it
your
your
your
filter,
which
the
pre-made
ones
that
are
just
like,
like
the
name
of
the
kind,
the
name
like
just
you
can
give
it
the
string
kind,
the
string
name,
the
string,
whatever
you
know
very
high
level,
and
then
it
will
match,
match
all
the
objects
with
that.
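The filter-and-matcher shape described here can be sketched in plain Go. This is a minimal illustration of the pattern under discussion, not the actual kyaml API: `Node`, `Filter`, `Matcher`, `MatchAll`, and `KindIs` are all illustrative stand-ins for the real RNode-based types.

```go
package main

import "fmt"

// Node stands in for kyaml's *yaml.RNode; the real type wraps a YAML node.
type Node struct {
	Kind string
	Name string
}

// Filter mirrors the described filter shape: a list of nodes in, a list out.
type Filter func(nodes []*Node) ([]*Node, error)

// Matcher examines a single node and reports whether it matches.
type Matcher func(n *Node) bool

// MatchAll builds a Filter that keeps only nodes satisfying every matcher,
// loosely modeled on the high-level match helpers mentioned above.
func MatchAll(matchers ...Matcher) Filter {
	return func(nodes []*Node) ([]*Node, error) {
		var out []*Node
		for _, n := range nodes {
			ok := true
			for _, m := range matchers {
				if !m(n) {
					ok = false
					break
				}
			}
			if ok {
				out = append(out, n)
			}
		}
		return out, nil
	}
}

// KindIs is a hypothetical pre-made matcher: give it the kind string and it
// matches nodes of that kind.
func KindIs(kind string) Matcher {
	return func(n *Node) bool { return n.Kind == kind }
}

func main() {
	nodes := []*Node{{Kind: "Deployment", Name: "app"}, {Kind: "Service", Name: "app"}}
	deployments, _ := MatchAll(KindIs("Deployment"))(nodes)
	fmt.Println(len(deployments)) // 1
}
```

The point of the sketch is the asymmetry the speaker describes: matchers work on one instance and return a boolean, while filters work on the whole collection, so the high-level helpers only compose with the one node type they were written for.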
A: That's how the whole framework is oriented right now. Those are two examples of tools that, if you were an end user who started building with KubeObject instead, you would just lose access to, and you would have to rebuild your own version, or we would have to build a second version of what a selector and a matcher is that handles KubeObject instead.
A
You
know
a
user
who
needs
to
manipulate
and
dig
into
kubernetes
resource
that
came
from
some
yaml
that
we
know
has
metadata
at
high
level,
but
has
arbitrary
fields
besides
that,
like
they're
at
the
same
level
of
abstraction.
I
agree
with
you
that
the
name
sucks
that's,
unfortunately,
pretty
annoying
to
the
end
user.
To
change.
A: So instead of having a second class of objects that has a smaller surface area and isn't compatible with a lot of the tools we've built, I would rather evolve RNode to be more in the shape that you would like and more user-friendly, because it's part of this alpha package too, right?
D: Yeah, I can understand Katrina's concern. Actually, before we talk about whether KubeObject may introduce duplication or whether there's a bunch of fixes we'd need to do on RNode: I think KubeObject and RNode (and Munchie also mentioned the name Pipe) are actually different ways to introduce the concepts and structure of a KRM function to users.
D
I
think
we
want
to
clarify
which,
how
we
want
to
introduce
the
ideas
to
users
when
we,
when
users
hear
about
arnold
pipe
and
calling
those
functions,
is
more
like
a
graph.
Is
this
the
right
way
we
want
to
introduce
users
if
that
make
things
very
complicated,
make
que
yamo
how
to
use
error
pro?
D: Maybe KubeObject is a better way to introduce the definitions, and from that perspective, even though there will be a lot of work in fixing the surface area, it still deserves the effort. That's my opinion. I think Munchie mentioned the name Pipe is not good, and you changed that, if I remember correctly. Do you mind introducing the new surface and the definitions you want to introduce to users?
A: Yeah, I think part of it, and I think I brought this up last time too, is a documentation issue. I love that this PR has a ton of examples in it, because that's really going to help no matter what the final shape ends up being. That's totally something we desperately need.
A
This
is
a
really
powerful
library
and
it's
not
just
built
for
those
simple
use
cases
that
this
pr
is
targeting
and
we
need
to
make
sure
that
we
continue
to
satisfy
the
other
use
cases
as
well,
like
the
way
that
it's
built
in
layers
makes
it
so
that
you
can
build
functions
that
are
composable,
which
is
something
we
don't
have
any
public
examples
of.
A
But
it's
really
cool,
because
it's
all
oriented
around
the
super
simple
concept
of
transforming
an
object
like
you
get
an
object
and
you
give
an
object
out
like
when,
because
we
kept
it
that
simple
at
the
low
layers,
you
can
actually
build
a
krm
function
that
invokes
other
krm
functions
just
by
importing
their
filters
and
basically
wrapping
them,
and
you
can
do
that
like
over
and
over
again.
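The wrapping idea described above can be sketched as filter composition. Again, `Node`, `Filter`, `Compose`, and `AddLabel` are illustrative names for the pattern, not the real kyaml types; `AddLabel` plays the role of a filter imported from some other function.

```go
package main

import "fmt"

// Node and Filter sketch the object-in/object-out shape the discussion
// describes: a filter transforms a list of nodes into a list of nodes.
type Node struct {
	Kind   string
	Labels map[string]string
}

type Filter func([]*Node) ([]*Node, error)

// Compose builds one filter out of several, so one KRM function can invoke
// other functions just by importing their filters and wrapping them.
func Compose(filters ...Filter) Filter {
	return func(nodes []*Node) ([]*Node, error) {
		var err error
		for _, f := range filters {
			if nodes, err = f(nodes); err != nil {
				return nil, err
			}
		}
		return nodes, nil
	}
}

// AddLabel is a hypothetical filter, as if imported from another function.
func AddLabel(key, value string) Filter {
	return func(nodes []*Node) ([]*Node, error) {
		for _, n := range nodes {
			if n.Labels == nil {
				n.Labels = map[string]string{}
			}
			n.Labels[key] = value
		}
		return nodes, nil
	}
}

func main() {
	wrapped := Compose(AddLabel("app", "demo"), AddLabel("tier", "web"))
	nodes, _ := wrapped([]*Node{{Kind: "Deployment"}})
	fmt.Println(nodes[0].Labels["app"], nodes[0].Labels["tier"]) // demo web
}
```

Because the composed result is itself a `Filter`, the wrapping can repeat "over and over again", which is the recursive reuse the speaker is pointing at.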
A: And even when you're integrating your function into Kustomize or kpt, you're kind of doing that, because they're working under the same principles under the hood, which is really cool and super powerful, especially if you're talking about building abstractions. That's some of the reason for, for example, the multi-version API processor.
A
It's
a
dispatcher
that
you
could
use
to
build
your
own,
like
orchestrator
alternative,
if
you
want,
but
you
can
also
use
it
just
inside
of
your
own
function,
to
make
it
a
sustainable
project
over
time,
because
you'll
want
to
release
new
versions
of
it.
So
you
use
it
as
a
dispatcher
just
between
v1,
alpha,
1
and
v1
beta1.
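The dispatcher idea can be sketched as routing a function config to a per-apiVersion handler. The names here (`Handler`, `Dispatch`, the `example.dev` versions) are illustrative, not the actual multi-version API processor from kyaml.

```go
package main

import "fmt"

// Handler processes one version of the function's config.
type Handler func(config map[string]any) (string, error)

// Dispatch picks a handler by apiVersion, the way the transcript describes
// dispatching between v1alpha1 and v1beta1 of your own function config.
func Dispatch(handlers map[string]Handler, config map[string]any) (string, error) {
	version, _ := config["apiVersion"].(string)
	h, ok := handlers[version]
	if !ok {
		return "", fmt.Errorf("unsupported apiVersion %q", version)
	}
	return h(config)
}

func main() {
	handlers := map[string]Handler{
		"example.dev/v1alpha1": func(c map[string]any) (string, error) { return "handled v1alpha1", nil },
		"example.dev/v1beta1":  func(c map[string]any) (string, error) { return "handled v1beta1", nil },
	}
	out, _ := Dispatch(handlers, map[string]any{"apiVersion": "example.dev/v1beta1"})
	fmt.Println(out) // handled v1beta1
}
```

Kept inside a single function, this lets old and new config versions coexist across releases without forcing users to upgrade in lockstep.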
A
So
in
that
sense,
like
the
tools
are
sort
of
recursively
useful
and
that's
something
that
that
we
need
to
preserve,
which
is
why
I'm
advocating
here
for
making
this
like
a
veneer
on
top
of
the
existing
tools,
rather
than
making
sort
of
a
set
of
tools
that
are
separate
or
that
don't
interrupt
with
the
with
the
the
tools
that
we
have
that
were
built
for
this
power
so
pipe.
I
agree
like
that's
unintuitive,
to
use,
let's
say,
and
that
one.
A
That's
not
the
entry
point,
though
so
I
think
that
was
a
separate
point
that
we
can
return
to,
but
like
all
of
the
all
of
the
functions
that
munchie's
adding
in
this
pr
to
make
it
easier
to,
I
don't
know
for
like
a
better
expression
like
dig
through
the
objects,
like,
I
think,
that's
all
really
great
and
helpful
and
way
more
approachable
to
use
than
than
pipe
is,
but
for
sequencing
filters
themselves.
A
The
framework
has
like
pipe
isn't
from
the
framework
right.
I
think
it's
kio,
maybe
or
kml.
I
don't
think
it's
from
the
framework
and
the
framework
has
its
own
way
of
composing
filters
together
and
getting
them
invoked
with
like
execute,
is
the
main
entry
point
so
and
then
the
api
version
processor
is
the.
If
you
want
dispatch
yeah.
D
If
I
understand
correct
is
the
main
concern
about
the
naming
of
arnold
or
cube
object,
if,
if
it's
called
r
node,
then
basically
the
function
is
merged,
except
we
change
the.
I
think
we
change
the
metadata's
name
from
get
to
metadata
directly.
That's
the
only
concerns.
D
A: I think we're on the same page that, at least for this PR, the new features will get added to RNode instead, and then we'll put RNode back in this struct here, the ResourceList struct we're looking at. And we're kind of diving into the reasons behind it to make sure we're on the same page about that.
C: In the short term, I think we can merge the KubeObject surface onto RNode, so it at least gives users the improvement. But in the long term we still have this problem, which is RNode.
C
It's
not
a
intuitive
name
and
like
it
have
a
still
have
a
very
big
surface.
So
what's
the
plan.
C
A: I think renaming it is not to be done lightly, just because of what a serious pain it would be for everybody to accommodate on the breaking-change front. We could consider it if you think it's vitally important and worth the disruption.

A: In terms of the surface area, I think we can totally change that, especially if, with the changes in your PR, we're introducing better, easier ways to do the exact same thing; sure, why not get rid of the old ways. And Kustomize is a heavy consumer of this, right, so it will show us any downsides we might not have realized, any consequences of reducing the surface area, any edge cases we maybe didn't consider with the new way we're proposing. But RNode is part of this library, right, and we can evolve it as well.
C: Okay, another problem I think we may want to address: currently, when users want to develop functions, they need to understand the framework package, the yaml package, and also the kio package, which combined is a pretty steep learning curve.
C: In my opinion, the framework package currently depends on the kio package and exposes some of the kio stuff. I think ideally we should not do that, to simplify this.
C: For example, the kio Filter is heavily used in the framework package, and meanwhile we have the ResourceListProcessor interface. In my opinion, they're doing very similar things, and this often confuses users, because they don't know which one to use.
A: So we need to provide guidance about what the simple case looks like: here's the tutorial for building a simple thing, here's the tutorial for building a multi-version API.
A: Here's a tutorial for layering them together. A filter is the central concept all of these things revolve around: you take in some YAML, you do whatever to it, and then you emit it. But sometimes that's not sufficient, because per the spec you need some sort of configuration that changes what the filter does, so the filter interface on its own isn't enough.
A: That's why we have more layers around that. If I remember correctly, you can build a processor straight out of a filter for the case where you don't actually have any config, and the processor interface lets you avoid filters altogether if you don't feel like using them.
A
If
I
remember
that
correctly
that
it's
just
it's
like
processed
and
you
take
in
a
resource
list
and
you
mutate
that
resource
list-
and
you
emit
an
error,
I
think
that's
the
signature,
which
would
let
you
do
whatever
you
want
in
there
technically,
if
you
don't
feel
like
implementing
it
with
the
tools
that
we've
provided.
So
it's
like
it's
built
in
a
layered
way
where
you
can
hook
in
where
you
want.
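The layering described here can be sketched with a processor that mutates a resource list, plus an adapter that builds a processor straight out of a filter for the no-config case. These are illustrative stand-ins for the kyaml framework types, not the real API.

```go
package main

import "fmt"

// Node stands in for a parsed Kubernetes resource.
type Node struct{ Kind string }

// ResourceList holds the items plus optional function config,
// following the shape the KRM functions spec describes.
type ResourceList struct {
	Items          []*Node
	FunctionConfig map[string]any
}

// Filter is the low-level layer: nodes in, nodes out.
type Filter func([]*Node) ([]*Node, error)

// Processor mirrors the described signature: take a resource list,
// mutate it in place, return an error.
type Processor interface {
	Process(rl *ResourceList) error
}

// FilterProcessor adapts a plain Filter into a Processor, for the case
// where the function has no config to consult.
type FilterProcessor struct{ Filter Filter }

func (p FilterProcessor) Process(rl *ResourceList) error {
	items, err := p.Filter(rl.Items)
	if err != nil {
		return err
	}
	rl.Items = items
	return nil
}

func main() {
	keepDeployments := Filter(func(nodes []*Node) ([]*Node, error) {
		var out []*Node
		for _, n := range nodes {
			if n.Kind == "Deployment" {
				out = append(out, n)
			}
		}
		return out, nil
	})
	rl := &ResourceList{Items: []*Node{{Kind: "Deployment"}, {Kind: "Service"}}}
	var p Processor = FilterProcessor{Filter: keepDeployments}
	_ = p.Process(rl)
	fmt.Println(len(rl.Items)) // 1
}
```

The adapter is the "hook in where you want" point: implement `Process` directly for full control, or supply only a filter and let the adapter handle the rest.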
A: So I'm okay with it if there are opportunities to not expose kio tools where we can just take care of stuff for the user; that's great. Filter, though, is the low-level thing that we build all the other tools we provide on top of, so that one in particular can't go away, for sure.
A: That's why RNode is so central: everything's built on Filter. I'm just looking at your comment here, which I think is related to what you were just saying.
C: Also, kio provides some reader/writer stuff, which I believe we don't need to expose to the user, necessarily, because the reader/writer applies things like setting reader annotations, which is more for the orchestrators, not for the function authors. Function authors are more likely to just read the YAML and decode it, not to set these additional internal annotations.
A: Yeah, I'm not attached to exposing the kio reader and writer specifically. You make a good point that most of the options it has aren't useful to function authors. I think it's mostly being used for expediency, because to make your function testable you don't want it to assume it's reading straight from standard in and standard out; you want to use a reader interface of some sort, so that you can write test cases where you control the input and the output.
A
So
for
that
reason,
all
of
the
places
that
we're
assuming
that
things
are
coming
from
standard
and
are
standard
out
are
in
a
sub
package,
a
kml
function,
framework
command
and
everything
in
the
framework
itself
uses
a
reader.
A
If
there's
a
way
to
make
it
less
complicated
like
by
hiding
the
fact
that
it's
a
kio
reader
that
has
all
these
fancy
options
like.
I
think
we
need
some
sort
of
reader
to
make.
It
continue
to
possible
to
make
these
testable,
but
it
we
don't
have
to
expose
options.
If,
if
that,
if
some
that's
something
we're
doing-
and
it's
not
necessary.
C
We
I
think
we
can
provide
some
test
utilities
or
some
test
harness
to
kind
of
use
the
kl
package
to
do
to
like
read
the
red
package,
but
if
we
let
the
user
directly
use
kl,
it's
it's
hard.
It's
it's
like
not
easy
to
use.
A
The
gist
of
what
the
the
package
does
is
it
actually
lets
you
test
either.
If
you
have,
if
you're
using
the
command
package
to
build
a
cobra
command,
you
can
use
it
for
that
or
if
you're,
not
using
cobra,
you
can
test
a
processor
directly
and
the
setup
is
that
you
have
one
directory
per
test
case
and
you
have
conventional
file
names
that
have
the
input
and
the
expected
output,
whether
that's
an
error
or
a
well
resource
list,
successfully
produced
and
the
framework
automatically
kind
of
wires.
A
C: So it's a golden test, kind of: you have the input in your directory and then the expected output, golden files, and then you compare them.
A: Yeah, that's exactly it. They're all in the same directory and they have conventional names, but that's exactly it: golden file tests. And it supports errors too; it's just a different file name if you expect an error, and then it matches the text in the errors file against the actual error that was received.
A: So it's always at the integration level, really, because you're taking a ResourceList in and checking the result.
C: Okay, I will take a closer look at this.
C
Not
in
this
pr,
not
in
this
pr
is
like
yeah,
we
can
fix
that.
You
know,
like
you,
know,
follow
up
later
pr.
D: Yeah, I'm sort of wondering if we should have agreement on the high-level ideas, like what we want to achieve from this PR. I can see we're trying to make kyaml functions more powerful. At the same time, we want to simplify or truncate the surface area, and we want to redefine the terms so it's easier for users to understand.
A: That gives folks something to play around with, and in a future follow-up we can reduce the API surface of RNode by converting some of Kustomize, for example, to use the new version instead of the old one. And if that goes well and it's a perfect concept, then we just get rid of the old way.
A
That
is
just
more
confusing
and
doesn't
work
any
better
and
then,
as
a
separate
decision,
we
can
consider
renaming
cube
objects,
but
that
one,
I
think
we
need
to
have
a
further
conversation
and
do
that
in
its
own
specific
pr,
because
of
the
amount
of
impact
just
because
of
how
central
that
object
is.
D: I felt the confusion between KubeObject and RNode is a significant issue, and if we do plan to rename it, why not do it earlier, so fewer users will be affected once they come to love these increasingly powerful RNode or KubeObject functions.
A: We would need to make a considerable amount of adjustments, and we shouldn't take that decision lightly. You could make that PR as soon as these PRs are in, a week or two from now, but it should be its own PR where we deliberately make that decision, so that when an end user comes along having to bear that update burden, they can specifically see the PR that did it and the explanation for why it happened.
A: I mean, we don't do releases that regularly; what I said could happen before the next release, even. And we should see what the consequences are for ourselves: when we do that, we'll have to do the rename ourselves all throughout Kustomize, so we'll be able to see the level of pain we're inflicting on our users and make the decision as part of that.
C: So, to summarize: the action item here is to merge the KubeObject surface into RNode...
C
Then,
and
then
also
for
the
other
s
main
functions
that
can
be
merged
with
the
command,
the
command
sub
package
and,
and
then
this
pr
should
be
a
good
goal
is
there
anything
else
should
be
addressed.
A
I
I
don't
want
to
say
that
I
will
for
sure
approve
it
just
like
that,
because
it's
so
big
but
those
those
are
the
main
blockers
that
I
see
right
now
and
I
will
commit
to
as
soon
as
that
is
updated,
going
taking
another
look
at
the
pr
and
you
know
with
it
getting
smaller
as
well.
I
think
that
will
help
with
the
review
ability.
A
I
just
want
to
make
sure
that
we're
that
what
we're
changing
is
going
to
have
an
internally
consistent
result
and
that
our
tools
are
going
to
continue
to
work
together
and
work
for
both
the
like
that
in
in
supporting
the
simpler
use
case,
we're
not
going
to
break
the
advanced
use
cases.
So
that's.
A
A
D: Oh, I actually removed my items. One is about the kyaml runtime we talked about earlier: we're going to drop the Starlark one, and I'm going to add that to our own repos, and that also means we're going to drop the other runtime, right? I don't remember the name.
A: So if the kyaml library evolves the Starlark implementation, it's not particularly relevant to Kustomize, because Kustomize is deprecating that feature completely and we could remove it whenever, as far as we're concerned. As for the question of moving it out to its own repo: this is CNCF-owned code, so that would have to be brought up at a SIG meeting. The runtimes are their own feature.
A
It's
not
just
the
function,
library
that
we're
improving
this
pr,
but
the
runtime
that
you're
talking
about
here
is
another
another
sort
of
independent
feature,
and
it
could
totally
make
sense
to
have
that
in
its
own
repo-
and
I
don't
have
any
conceptual
objections
about
that
myself,
but
would
have
to
figure
out
just
say
how
to
like
whether
that
logistically
makes
sense
and
is
worth
like.
The
effort
of
extracting
it
versus
just
evolving
the
starlark
thing
to
meet
your
needs
and
unhooking
customize.
D
So
you
means
the
whole
whole
fn
directory
can
be
in
its
own
repo.
At
the
same
time,
correct
me.
At
the
same
time,
we
remove
the
starlock
subdirectory
under
the
fn.
A: Probably not, because it's not importing that file, but somewhere in this vicinity there's something that makes Kustomize specifically use the Starlark function runtime, and we could just disconnect that. Then the changes you make would no longer have an impact on Kustomize, because Kustomize won't support Starlark, and that would be the easiest thing to do, for sure.
C: So, regarding the runtime, I want to ask: in Kustomize, have you heard any complaints or concerns about using Docker with Kustomize?
A
Docker,
so
we
definitely
are
aware
of
plenty
of
use
cases
where
customize
itself
is
being
used
in
docker
and
docker
docker
impossible,
so
they
they
aren't
able
to
use
the
containerized
version
of
functions
as
a
result,
the
alternative
there
is
exec.
A
So
that's
that's
why,
as
far
as
customers
is
concerned,
exec
support
is
the
number
two
priority
after
containers
and
and
that's
that's.
The
way
forward
that
we
decided
in
our
kep
is
to
support
both
containers
and
exec.
As
the
extensions
points,
their
mechanisms.
D
C
For
the
for
the
for
mike
and
the
windows,
so
have
you
heard
any
users
like
feedback
on
these.
C
Yeah,
so
for
the
for
capsid,
we
want
to
find
or
provide
some
darker
alternatives
for
running
containers,
since
docker
will
no
longer
be
free
on
mac
and
windows.
So
this
is
a
concern
for
cap.
A
I
haven't
heard
the
feedback,
but
that
totally
makes
sense
to
me
so
to
be
clear:
I'm
not
advocating
for
the
function
runtime
libraries
used
by
kept
and
customized
to
diverge
like
I
think
they
should
stay
together
and
orchestrators
can
pick
and
choose
which
ones
that
they
want
to
support
so
like.
If
kept
wants
to
build
a
non-docker
container
runtime,
I
totally
want
it
in
function,
runtime
or
whether
that's
inside
camel
or
as
its
own
repo.
I
think,
having
like
a
central
collection
of
enzymes
that
are
supported
by
the
that,
like
help
implement
the
spec.
B: Also, just to add, I do have some user feedback from Kustomize users saying they can't run the containerized KRM functions because they have end-user devices that don't have Docker or aren't supported by Docker. So I think a Docker alternative makes a lot of sense for those users.
E: Yeah, with very little context: hi folks, my name is Nick. For many years I was the tech lead of Crossplane, and I'm now part of the Crossplane project steering committee. We have a use case where we're highly interested in using KRM functions. It's still early, but shelling out to Docker as the way we run things is absolutely not going to work, so I'm only just diving into these runtime libraries at the moment. I'm definitely curious to chat with you all about it.
A: Okay, so on the topic of that alternate runtime: is that something you're working on, or is there a doc we should all go check out about what the alternatives might be, or is there an action item here, basically?
C: I'm working on exploring this, but we're still in the early stage. I think if we find some solutions, they can be used in both kpt and Kustomize.
D: Yeah, the other item is about kyaml: it's under Kustomize, but we release it separately. I'm wondering about every time we want to make surface changes; it's in alpha now, but in the long term, once it's at v1 or a more mature version, how can we guarantee compatibility? Do we have infrastructure in place to make sure changes wouldn't affect our users?
D
I'm
thinking
like
just
the
as
munchie
mentions
the
golden
test
that
possible.
We
encourage
our
use,
our
keyamu
or
customized
users
to
add
their
own
tests
in
the
repo.
D
They
don't
need
to
tell
us
their
their
use
case
say
for
private.
They
may
have
a
private
repo
that
they
won't
be
able
that
we
won't
be
able
to
realize
in
the
gold
library
import
the
importantly
so
for
kiamo
we
have
in
the
go
document.
It
tells
all
the
public
repo
which
imports
that
kml
library,
but
there's
for
it,
wouldn't
show
the
private
repo
and
to
better
serve
the
private
repo
users.
We
can
encourage
people
to
add
text
in
the
keyammo
or
customize
framework
repo,
and
we
make
sure
those
test
passes.
D
B
A: I think I understand the general idea, but I'm having trouble imagining what the specifics look like, because we're like a toolkit for building a bunch of different things.
D
It
can
be
like
I'm
a
kml
user,
but
I
don't
my
repo
is
public,
it's
private
and
I
import
the
customized
repo
repo
okayamo
libraries.
I
want
to
make
sure
the
changes
on
kiamo
wouldn't
break
my
ripple,
my
own
code,
that
I
cannot
share
publicly.
D
So
I
add
my
I
add
test
in
kayamu
codebase
that
I
expect
every
change
to
kayamo
will
run
a
pre-submit
test
against
my
resource
input
and
the
expected
output
I
provide,
and
if
the
test
fails,
that
pr
shouldn't
be
merged
or
if
okay
yama
maintainers
have
some
breaking
change
about.
Like
the
you
exchange,
I
can
be
notified
without
sharing
details
without
monitor,
without
checking
the
latest
change
on
kayamu.
A
I
think
that
at
a
high
level,
that's
definitely
a
great
idea
to
have
some
sort
of
like
easy
workflow
for
people
to
contribute
their
test.
Cases,
like
I
said
customize
does
have
something
very
similar
to
that
in
concept.
Where,
even
for
just
like
bug
reports,
we
encourage
people
to
commit
test
cases
to
this
particular
directory.
So
having
like
a
workflow
for
that
established
totally,
I
don't
think
we
want
to
be
responsible,
especially
given
the
scope
of
the
team
to
like
reach
out
to
individual
people.
A
D: Yeah, so once we make that more automated and part of the framework, it basically reduces the maintenance cost. At the same time, I think we always have the question: have we heard people reporting certain things? It may make decisions easier once we already have those resources provided by users.
A
Have
we
heard
what
reports.
D
Oh
I
mean
when
we
talk
to
to
make
a
decision,
there's
always
a
question
like
have.
We
heard
people
reporting
certain
kind
of
usage
of
this
feature
and
we
try
to
use
that
to
help
us
make
decisions
rather
than
asking
those
questions.
D
If
users
they
they
provide
those
kubernetes
resource
in
the
golden
test,
as
the
in
and
out
yaml
files,
then
it's
easier
for
us
to
to
check
and
make
sure
how
users
will
feel
about
those
feature
changes
or
the
the
ux
change.
A
I
think
it
will
never
be
a
starting
point
to
get
the
answer
to
that
question.
Just
because,
like
the
portion
of
users,
who
will
bother
to
actually
contribute,
is
going
to
be
a
small
small
slice
of
what's
out
there.
But
that
said
that
doesn't
mean
it's
not
worthwhile
like
it's
definitely
better
to
have
more
internal
data
and
more
enforcement
of
what
constitutes
a
breaking
change
than
not
that's.
Definitely
a
good
thing.
C
C
A: I just wanted to mention, on the composition side: I probably won't do a regular stand-up for it, because the core code is out there, and there's a list of issues that need to be tackled before it could be merged, just because it won't be helpful without those issues tackled first, and it's not a burning priority for the Kustomize project compared to other issues we're facing right now. So it's on the roadmap.
A
It
is
there,
but
it's
there
as
a
secondary
priority,
so
unless
other
folks
feel
passionately
about
moving
that
forward,
it's
not
something
that
I'll
be
personally
working
on
for
a
little
bit
still
on
the
roadmap.
But
it's
just
it's
not
something.
I'm
gonna,
I'm
gonna
start
with
this
year.
A
Anything
else
to
to
add
before
we
wrap
it
up
here.
Natasha
do
you
have
any
sort
of
stand
up?
No,
I
don't.
A
Okay,
well,
thank
you,
everyone
for
coming
to
this
karen
function,
sub
project
meeting.
Thank
you
for
all
of
your
hard
work
on
these
topics
and
we'll
see
you
again
in
two
weeks.