From YouTube: Kubernetes SIG CLI 20220504 - KRM Functions
A
Hello, and welcome to today's SIG CLI KRM Functions subproject meeting. Today is Wednesday, May 4th, and my name is Katrina Verey. My pronouns are she/her, and I will be your host for today. So before we get started with the topics, do we have any introductions today? Anyone who would like to introduce themselves to the group?
B
Yeah, my name is Holden O'Neill. I'm a software developer at SAS Institute, and I'm interested in learning more about KRM functions. I've been using Kustomize for a couple of years now, as well as Kubernetes and things like that. So I'm just here to try to learn a little bit more and maybe contribute something back.
A
So we only have one topic here today. Mengqi, would you like to present your proposal?
C
Okay, so this one is a proposal about adding something new to the KRM Functions Spec to support running functions as a server. The motivation here is that currently, when we evaluate KRM functions, we roughly have six steps. First we spin up the runtime, which currently can be either exec or container, and then the next step is to do some pre-processing, like fetching something from the network or parsing a schema.
C
From our experience at Google, we feel that steps one, two, and six can take more time than the actual evaluation, which is steps three through five. For example, if we are using a container as the runtime, spinning it up may take a few seconds. Similarly, fetching a schema from the network, if it's big, can take half a second or longer, and likewise for parsing the schema.
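To make the motivation concrete, here is a rough cost model. The timings are purely illustrative assumptions, not measurements from the meeting; the point is only that one-shot invocation pays startup and pre-processing on every call, while server mode pays them once.

```python
# Illustrative timings (assumptions, in seconds), roughly matching the
# magnitudes mentioned in the discussion.
STARTUP = 2.0      # spin up the container/exec runtime
PREPROCESS = 0.5   # fetch and parse a schema
EVALUATE = 0.05    # the actual function evaluation

def one_shot_total(n_invocations):
    # Current model: every invocation repeats startup and pre-processing.
    return n_invocations * (STARTUP + PREPROCESS + EVALUATE)

def server_mode_total(n_invocations):
    # Proposed server mode: startup and pre-processing happen once.
    return STARTUP + PREPROCESS + n_invocations * EVALUATE
```

With these numbers, 100 invocations cost about 255 seconds one-shot versus about 7.5 seconds in server mode.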
C
So, if we can do those steps just once: in spec v2, I'm proposing support for running a KRM function as a server, so that we can spin it up once, do the pre-processing once, and then keep evaluating functions for many different resource lists.
C
And we don't plan to change the ResourceList itself, but we do need to change the function metadata schema to indicate what spec version these functions conform to. So we're introducing a new field here.
C
So this is a new field that is a list of the spec versions the function conforms to, so it can hold multiple values.
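As a sketch only, the list-valued field might look like this in the function metadata. The kind, apiVersion, and field name below are assumptions for illustration; the actual names come from the metadata schema and this proposal.

```yaml
# Hypothetical sketch of function metadata with a list-valued
# spec-version field. All names here are illustrative.
apiVersion: fn.example.com/v1alpha1   # assumed group/version
kind: KRMFunctionMetadata             # assumed kind
metadata:
  name: set-labels
spec:
  # Spec versions this function conforms to (the new field).
  specVersions:
    - v1
    - v2
```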
C
We can support two flavors of server. The first one can be an HTTP server; it can accept JSON or YAML, and it can also return YAML or JSON in the response. Currently, what we support in spec v1 is always YAML.
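As a sketch of what the HTTP flavor could look like: the port, path, and wire format below are assumptions, since the server-mode spec is still being drafted. The server accepts a ResourceList as JSON and returns the evaluated ResourceList; the `evaluate` body is a stand-in for real function logic.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def evaluate(resource_list):
    """Stand-in for real function logic: label every item as processed."""
    for item in resource_list.get("items", []):
        labels = item.setdefault("metadata", {}).setdefault("labels", {})
        labels["example.com/processed"] = "true"
    return resource_list

class FunctionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the ResourceList from the request body and evaluate it.
        length = int(self.headers.get("Content-Length", 0))
        resource_list = json.loads(self.rfile.read(length))
        body = json.dumps(evaluate(resource_list)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks forever):
# HTTPServer(("127.0.0.1", 8080), FunctionHandler).serve_forever()
```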
C
And similarly for the gRPC server. The protobuf definition is being worked on; I haven't finished it yet, because in our function spec we have a field called items, and also the functionConfig, which can be any KRM resource, so we need a generic way to represent them. I'm still working on this, and I'm expecting to update it by the end of the week.
C
To find out whether a function implements spec v2, or both v1 and v2, the user can eventually find that in the KRM function registry, when developers publish their functions along with their function metadata there.
C
And the second user story is that, as an orchestration tool developer, I can develop tools that work with both spec v1 and v2, so that the tools can choose whether or not to reuse the KRM function runtime, depending on their needs.
C
As a function developer, I can implement my functions to conform to spec v2, or both v1 and v2, and then publish them to the registry for end users to consume. And if it's a container-based function, the function metadata can also be saved into OCI annotations, so when a user fetches an image it's more self-contained: the user has the image, can read the metadata from the image itself, and then gets everything he or she needs.
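For illustration, the image manifest of a container-based function could carry that metadata as OCI annotations. The annotation keys below are made up; the proposal would define the real ones.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "annotations": {
    "fn.example.com/spec-versions": "v1,v2",
    "fn.example.com/server-mode": "true"
  }
}
```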
C
On scalability: this proposal is intended to help performance. With the same amount of CPU and memory resources, and given the same time, it can evaluate more KRM functions.
C
This can help when, for example, a platform team in your enterprise wants to build some tooling to manage their KRM resources. The server mode can be helpful there because they may evaluate the same functions on behalf of different users.
C
We could keep the one-shot behavior (spin up the runtime, evaluate, and shut down) alongside the server mode, but we can also make the two specs disjoint, meaning v1 and v2 don't have any overlap. That's one option.
C
Another alternative is that we extend the current format. In the input we can accept multiple resource lists separated by dashes, and in the output we do the same: write one resource list out, then add a "---", and then the next one. Something similar to the server mode is that the KRM function, after evaluating one resource list, stays there and waits until it sees the next resource list after the three-dash separator.
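That streaming alternative can be sketched as follows. This only illustrates splitting a stream on "---" separator lines; it is not the spec's actual framing rules, which this proposal would still have to define.

```python
def split_documents(stream_text):
    """Split a multi-document stream into documents on '---' lines."""
    docs, current = [], []
    for line in stream_text.splitlines():
        if line.strip() == "---":
            # A separator closes the current document, if any.
            if current:
                docs.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        docs.append("\n".join(current))
    return docs
```

A long-running function would apply this incrementally: evaluate each document as it completes, write the result followed by "---", and keep waiting for the next one.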
C
In our schema, we can first offer this along with another proposed change, which proposes changing the field from a single value to multiple values. We would do it in v2alpha1 and give it some time, then move it to v2beta1, and eventually graduate to v2. So that's an overview of this proposal.
A
Yeah, go ahead. All right, so my first question is: why is it a v2, and what is it a v2 of, exactly? Because it sounds like we're not actually incrementing the resource version on any of the associated types. ResourceList isn't changing, and the function metadata (what's it called? I guess the kind isn't shown in the example), whatever the kind is for that resource, isn't actually getting a new group version.
C
So I think it can be made backward compatible, but in my opinion the server mode is significantly different from what we currently have, so it may make sense to make it a v2. But I'm open on this.
A
I guess it's also related to my next observation or comment, which is: if we were implementing v2 as a superset of v1, then server mode is optional in v2 and nonexistent in v1.
A
So perhaps the more useful distinction would be to just explicitly, declaratively say "server mode supported: true/false". Because if you say "conforms to spec version v2", well, if server mode is optional in v2, then you are conformant with v2 even though you don't support server mode, right? And as you point out toward the end of your document, I don't think this mode is going to be approachable enough to be universally adopted, because it has a lot more requirements than the original, both on the developer side and in the fact that you just have to have infrastructure running in the background for it to work.
A
So I think we would need to retain that as well: retain the original mode in v2 and make the server mode optional. So we'll need that further specification, whether or not we bump the version.
C
Okay, I see. So it's kind of a feature gate or a feature flag. We can put it in the function metadata schema, probably with a different name.
A
Oh, I don't know if you were talking about wanting to version the feature and put it through alpha. You could put the word "alpha" into the field as well, if you still wanted to do that, right? So you could have an alpha field, "alphaServerSupport" or whatever you want, set to true, and then when you graduated it, that would just change the field to "serverSupport: true".
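That graduation path might look like this in the metadata. The field names are the hypothetical ones from the discussion above, not a settled schema.

```yaml
# While the feature is in alpha (illustrative field name):
alphaServerSupport: true

# After graduation, the same declaration without the alpha prefix:
serverSupport: true
```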
A
Yeah, I have another one that's very related to this field question as well, but it kind of ties into my biggest question about the whole thing, which is the question of trust. We have a lot of different actors. This actually introduces even more actors than any previous scenario that we have described in our other KEPs.
A
If I'm counting correctly, we have up to five. You have the end user, who may or may not have written their Kustomization or Kptfile themselves; they just want to use it. So: an end user inflating a manifest, who may or may not be using catalogs; there are two sub-scenarios in there. Then you have the manifest author, who may or may not be the same person, and who again may or may not also author a catalog: two sub-scenarios.
A
Then you have the function author, which you've addressed; you have the orchestrator author, like kpt; and now you have the function server host as a fifth.
A
So something I'd really like to see in the proposal is more detail, in all of the stories and in some of the technical design places as well, about who is owning the trust, and what expressing that trust in the other parties looks like from these various stories' perspectives. If I'm an end user inflating a manifest, what trust do I need to explicitly express in the manifest author, the function author, the orchestrator author, and the function server host? Some of that's implicit, obviously: I run kpt...
A
Obviously I trust kpt; that's good enough. But with catalog we have some additional trust mechanisms in there, and there's nothing for the function server host in particular, for their relationship with any of these other parties. So that's something I'd like to see a lot more detail on. And going back to the field: one of the places where that comes up in the technical design is that a boolean probably isn't sufficient.
A
Because of that trust question. Maybe the function supports being run in server mode, but there are a couple of different parties in between the end user, who's actually doing the invocation, and the function author, who added the support. The fact that it can theoretically be run that way doesn't mean that there's a particular server running somewhere that I can use and that I trust. So there are missing pieces.
A
In my mind, they are: how do I know where that server actually is, as the orchestrator author and as the end user? And as the end user in particular (well, I guess we're the orchestrator author), how do I express my trust in that particular function server?
C
Yeah, I will add more details discussing this.
A
If you have any thoughts to share on that topic, we could also discuss it now; we still have plenty of time. That's my big question: those relationships and how to express them in a way that will be good enough for security.
C
Yeah, I think in that scenario the server host is responsible for establishing this trust, and also for carefully picking the functions they run as a server. When they run these KRM functions as servers, that means they trust those functions, and they provide a service for their KRM function users.
A
So you have a trust relationship between the function author and the function server host in that case, and then the end user no longer needs to trust the function author directly; they need to trust the server host. Because, for example, some functions require network access because they interface with a secret store.
C
Yes, yes. And the function orchestrator tools may also support authentication, but I think that may complicate our starting point. Maybe at the starting point we can restrict this use case to within one organization, which means the end users who can access this KRM function evaluation service are in the same organization, so that authentication may not really be necessary. That can simplify our initial version.
A
If you see the manifest... maybe there's some detail that's just not specified in the design yet, but if the manifest just says "this function supports running as a server", what does that mean? Where is the server? How does it get found? Where does the information about that come from, and how does the end user say, "yes, use that one"?
C
So I imagine that is solved by the orchestration tools. The tools can run the KRM functions in both the server mode and the standard mode, and the tools also provide the end user with some kind of client. Then this client tool can talk to the orchestration tools, and they know...
C
They can give the client the information about where the KRM function server is. Another alternative is that the orchestration tool can even be a kind of dispatcher, so the client tool would just always send to the orchestration tool, and the orchestrator would dispatch to the right KRM function server run by that tool.
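The dispatcher idea could be sketched like this: a routing table from function identifier to server endpoint, with a fallback to today's one-shot invocation. All names and endpoints are hypothetical.

```python
# Hypothetical routing table maintained by the orchestration tool,
# mapping function images to in-cluster server endpoints.
FUNCTION_SERVERS = {
    "gcr.io/example/set-labels:v1.0": "http://set-labels.fn-system.svc:8080",
}

def dispatch(function_image):
    """Route to a running function server if one exists, else fall back
    to the standard one-shot runtime (spin up, evaluate, shut down)."""
    endpoint = FUNCTION_SERVERS.get(function_image)
    if endpoint is not None:
        return ("server", endpoint)      # POST the ResourceList here
    return ("one-shot", function_image)  # spin up the runtime as today
```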
A
So in the concrete case that you have, let's say talking about kpt and Google as the server host, would you then hard-code into a given distribution of kpt the address for Google's server host? Or how is that information getting in there, still, even if the orchestration tool is the one doing the selection?
C
So the current use case we have in mind is that the orchestration tools and also the KRM function servers run in the same Kubernetes cluster; it all happens in the same cluster. It's not like Google provides a hosted service, like an official hosted service somewhere, for the KRM functions, with the tools making requests out of the cluster they're running in to that service.
A
So I feel like we should have a way for me as an end user to say that I'm running this function in my own fleet over here, and get the benefit as well. Otherwise, the benefit is just so narrow.
C
You mean like providing a discovery mechanism, or perhaps caching the endpoint somewhere for the tool so that it can access that?
A
That's
not
100
hosted
like
you're
the
one
you're
talking
about,
because
I
don't
think
that
that
describes
the
vast
majority
of
our
users
and
the
performance
problem
that
you're
addressing
with
this
is
real
and
it's
interesting
and
like
if,
if
you
have
a
user
who's,
not
using
your
hosted
kept
service,
but
they
are
a
customer
and
they
use
kept
and
you're
already
hosting
those
functions
that
they
need,
like
it'd,
be
great
if
they
could
use
them
too
and
get
the
benefits
right
and
like
how.
How
does
the
discovery
work?
A
I
don't
know
if
you
were
to
use
the
orchestrator
approach
like
you're
saying.
Maybe
the
orchestrator
needs
to
have
a
flag
that
says
you
know,
function
server
host
here
and
then
you'd
probably
still
need
like
a
series
of
fallbacks,
because
you
don't
know
which
functions
the
host
supports.
A
You
could
potentially
also
use
catalog
for
this.
It
might
be
a
little
bit
more
brittle,
but
it
does
theoretically
support
use
cases
that
are
very
similar
right
in
terms
of
like.
Where
is
the
implementation?
That's
a
core
feature
of
what
catalog
is
trying
to
support
right
so
that
you
can
support
multiple
exec
binaries
that
might
be
in
different
places,
verify
them
and
containers,
and
you
could
have
a
runtime
environment.
A
That
is
saying
I
prefer
containers
or
I
prefer
exec
or
whatever,
and
then
you
could
tell
that
to
your
orchestrator
and
your
orchestrator
can
go.
Oh,
they
prefer
containers.
So
I'm
going
to
look
through
my
catalog
and
I'm
always
going
to
select
container
from
the
available
runtimes
for
the
given
function.
You
could
do
something
similar
for
for
the
server
side,
almost
as
an
additional
runtime.
A
Maybe
that's
another
way
to
think
of
it
where
you
instead
of
having
well
in
addition
to
having
this
the
stanza
container
and
the
one
about
exec,
you
would
have
one
about
server
function,
server
and
it
would
have
information
about
where
that
server
is
and
any
potential
credential
stuff.
I
guess,
like
you
were
saying,
might
need
auth
in
the
future
and
then
like
the
catalog
is
something
that
can
already
be
baked
in,
like
we
already
have
the
option.
A
In theory (it's not implemented, but the design has the option) it could be baked into an orchestrator, so that it would just work like that automatically and support a given server in this case; or the end user could write one that augments it, and they would have the chance to plug in their own targets, in this case function servers.
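In catalog terms, that could mean adding a server stanza alongside the existing runtime stanzas. This is a hypothetical sketch of the idea, not the actual Catalog schema; all field names and URLs are illustrative.

```yaml
# Hypothetical: one function entry offering three runtimes, with the
# function server added alongside container and exec.
runtimes:
  container:
    image: gcr.io/example/set-labels:v1.0
  exec:
    platforms:
      - bin: set-labels-linux-amd64
        uri: https://example.com/set-labels-linux-amd64
  server:                                   # new, hypothetical stanza
    endpoint: https://fn.example.com/set-labels
    # credential/auth configuration could go here in the future
```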
C
So one thing I want to clarify: in our use case, it's not a Google-hosted thing. What we are trying to do is provide these orchestration tools to the users, and the users do self-hosting in their cluster.
C
So that means, for example, each enterprise can deploy their own orchestration tools in their clusters, and the KRM-functions-as-a-service is maintained within that cluster. It's not shared across different enterprises or things like that.
C
Yeah, so I will add more details discussing the perspective you mentioned. I think that's worth more discussion in the proposal.
A
Thanks. And if we could add some examples of what the configuration actually looks like for each one of the stories (because we're talking about config management tools, when it comes down to it), I think that would be super helpful to understanding the user experience for all these various parties.
A
All right, if there's nothing else, then thank you, everyone, for attending. I think that was our only topic for today; yep, nothing new has been added to the agenda. So if you are here today, please add your name to the attendees list, and thank you very much for coming. Our next meeting will be two weeks from today.