From YouTube: Kubernetes SIG CLI 20220420 - KRM Functions
A
Welcome to another meeting of the KRM Functions subproject. Today is April 20th. My name is Katrina Berry, and I'll be your host today. I think we have at least one new face joining us today, so if you'd like to introduce yourself, please feel welcome.
B
I guess that would be me. I was at the previous meeting, so I might be repeating myself a little, but my name is Corey Jacobson. Happy to be here, looking forward to SIG CLI, and just trying to dive into everything and see where I can help.
A
We have several topics today, so let's dive right in. The first one is that we are discussing potentially changing the time of this meeting, because some of the folks who are interested in attending are not able to make this time slot. I posted a poll in the SIG CLI KRM Functions Slack channel. So far we only have a couple of respondents, but the leading option would be to move it to 1:30 PM or 2 PM Pacific, so a little later in the day.
A
So if you plan on attending this meeting going forward, please go answer that poll so we can pick a time that works for everyone. I also wanted to suggest that we start requiring an agenda a day in advance, because sometimes we've shown up to this meeting with nothing in the agenda, and it would help folks plan their day.
A
If,
if
we
know
whether
or
not
the
meeting
is
actually
going
to
take
place,
so
I
would
propose
that
I'll
set
basically
a
reminder
for
myself
I'll
go
check
the
agenda
the
day
before
and
if
there's
nothing
in
it,
then
I'll
post.
A
cancellation
to
our
select
channel
any
objections
to
that.
A
C
No, I was just going to say that sounds great, so maybe I don't need to say anything.
A
I know Natasha, a regular attendee, isn't able to make it today, so I think I'll bring up the frequency conversation once more of the folks who generally attend are present. So yeah, please go fill out the poll, and we can move on to the next item. Carlos, I think this is your item.
C
I added that item. I wrote a PR, based on one of our previous discussions, making a change to the KRM function spec. Because the change is very small, and the use case is maybe not very widely used, we previously discussed that we'd do a v2 alpha. So I prepared the PR, and there are just two things I'm not really sure about; they may affect the KRM function spec, so I wanted to mention them here.
C
One is that we could add an option to automatically move from one version to the other, but I'm not sure if we want to go that route. The other is to just validate.
C
If
the
version
specified
is
one
we
can
validate
against
the
expected
schema
and
throw
whenever,
whenever
we're
trying
to
process
the
data,
the
other,
that
is
a
little
bit
more
concerning
probably
for
this
change
and
that
it
may
affect
whether
it
gets
merged
or
not
is
the
the
way
that
the
code
base
is
structured
is
a
little
bit
difficult
to
add
version
changes,
because
it
will
require
changes
across
the
entire
library
and
also
because,
at
least
at
the
version
that
it's
been
used,
go
didn't
have
generics
at
the
time,
it's
very
difficult
to
pass
different
kinds
of
values,
so
value
types,
and
in
this
case
like
if,
if
I'm
using
results
for
version
one
in
several
places,
I
need
to
make
sure
like.
C
I have two versions of everything, and then it's a lot of code duplication. The way to avoid that right now is that we can just include both fields, the new one and the old one. For context, the proposed field was the singular proposedValue; now it's the plural proposedValues. We could include both of them in the results object.
C
That will mean it's a little more confusing, and we'll rely on the struct field comments, but I think that would be the cleanest way to implement it without having to modify the entire code base just for this change. So I'm just bringing those up here, in case they're interesting for this question and in case there's an idea we can suggest right now.
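For illustration, here is a minimal Go sketch of what carrying both fields on one struct could look like, relying on struct field comments and omitempty so each consumer only sees the field it sets. The type and field names are assumptions for the sketch, not the actual kyaml types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Result is a hypothetical sketch of the results object discussed above,
// carrying both the old singular field and the new plural one so a single
// struct can serve both spec versions.
type Result struct {
	Message string `json:"message"`

	// ProposedValue is the old singular field.
	// Deprecated: use ProposedValues instead.
	ProposedValue string `json:"proposedValue,omitempty"`

	// ProposedValues is the plural replacement.
	ProposedValues []string `json:"proposedValues,omitempty"`
}

// MarshalResult serializes a Result; omitempty keeps whichever field is
// unused out of the output, so old consumers never see the plural field
// unless it is actually set.
func MarshalResult(r Result) string {
	b, _ := json.Marshal(r)
	return string(b)
}

func main() {
	fmt.Println(MarshalResult(Result{Message: "ok", ProposedValue: "a"}))
	fmt.Println(MarshalResult(Result{Message: "ok", ProposedValues: []string{"a", "b"}}))
}
```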
A
Yeah, I haven't had a chance to fully review this PR yet, unfortunately, so I don't want to give you a half-baked answer off the top of my head.
C
I think the biggest thing is that, ideally, we would like to have two separate objects for the resource list, or for the results, based on the version, but then it's a little more difficult to pass those around. That's why we need both fields living on the results object, at least for now.
A
Yeah, I don't think that's especially a problem. I don't want to do something that significantly complicates the code base unless it's necessary. What the upstream types do is have an internal version: you load the real versions that are consumer-facing, and then you have a conversion stack that converts into the internal version, which supports all of them. So if we went really crazy, that's what we would do, I guess.
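A toy sketch of that internal-version pattern: each consumer-facing version converts into one hub type that is the union of all versions. All names here are invented for illustration; the real upstream conversion machinery is far more involved:

```go
package main

import "fmt"

// internalResult is the hub type; it supports every versioned field.
type internalResult struct {
	Message string
	Values  []string
}

// resultV1 is a consumer-facing version with the old singular field.
type resultV1 struct {
	Message string
	Value   string
}

// resultV2 is a consumer-facing version with the plural field.
type resultV2 struct {
	Message string
	Values  []string
}

// toInternal converts the v1 shape into the internal form.
func (r resultV1) toInternal() internalResult {
	vals := []string{}
	if r.Value != "" {
		vals = append(vals, r.Value)
	}
	return internalResult{Message: r.Message, Values: vals}
}

// toInternal converts the v2 shape into the internal form.
func (r resultV2) toInternal() internalResult {
	return internalResult{Message: r.Message, Values: r.Values}
}

func main() {
	a := resultV1{Message: "m", Value: "x"}.toInternal()
	b := resultV2{Message: "m", Values: []string{"x", "y"}}.toInternal()
	fmt.Println(a.Values, b.Values)
}
```

The rest of the library then works only with internalResult, so version differences stay at the edges.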
A
If,
with
the
validation
approach,
perhaps
we
could
do
something
like
have
an
internal
type,
that's
embedded
in
the
strongly
version
types
and
and
have
like
each
strongly
version
type,
have
the
validation
on
it,
for
that
type,
so
be
very
similar
to
what
you
have
right
now,
except
that
the
validations
wouldn't
be
in
a
row
on
the
single
type
that
would
be
separated.
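One possible shape for that, as a hedged sketch: embed a shared internal type in each strongly versioned type, and give each version only its own Validate method. The version names and fields are assumptions, not the real spec:

```go
package main

import (
	"errors"
	"fmt"
)

// internalResult holds the union of fields across versions.
type internalResult struct {
	ProposedValue  string
	ProposedValues []string
}

// resultV1Alpha1 embeds the internal type and validates only what the
// older version allows.
type resultV1Alpha1 struct{ internalResult }

func (r resultV1Alpha1) Validate() error {
	if len(r.ProposedValues) > 0 {
		return errors.New("proposedValues is not part of v1alpha1; use proposedValue")
	}
	return nil
}

// resultV1Alpha2 allows the plural field and rejects the deprecated one.
type resultV1Alpha2 struct{ internalResult }

func (r resultV1Alpha2) Validate() error {
	if r.ProposedValue != "" {
		return errors.New("proposedValue was replaced by proposedValues")
	}
	return nil
}

func main() {
	old := resultV1Alpha1{internalResult{ProposedValue: "x"}}
	fmt.Println(old.Validate())
}
```

Dropping support for a field at v2 then means deleting one wrapper type, without touching the shared internal type's consumers.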
A
That
might
be
a
nice
intermediate
option
as
long
as
we're
being
really
clear
about
what
version
supports,
what
thing
and
have
a
nice
way
of
removing
the
support
for
the
field
when
we
go
to
v2
and
like
it
will
be
easy
for
us
to
do
that
cleanly,
I'm
not
too
worried
about
having
the
technicality
of
the
struct
type
in
our
in
library
that
supports
both
fields.
D
So I think that's probably fine. I missed some of the earlier part of what Carlos said, because there was, I guess, probably a network issue on my side.
D
But from what I heard, we'd kind of have internal types which represent the union of all the versions, and then we have a validation for each version?
C
I think that's pretty much it. I also thought it would be interesting to share how it felt doing this PR, because when she said there's probably going to be some v2 in the future, based on some changes that Google wanted to make, this may inform what it means to make a v2. So that's also a good point to make.
D
Yeah,
I
think
what
what
what
you
proposed
makes
sense
like
having
two
failed
in
the
internal
types.
A
Next,
up
on
the
agenda,
we
have
server-side
carry-on
functions.
I
assume
this
is
your
item.
Munchie.
D
For this item, I would expect the proposal to be ready by the next meeting. Currently I have this POC; it currently uses a gRPC interface.
D
So
how
how
it
works
is
that
so
one
it's
receivable
request
from
the
a
user
or
whoever
want
to
evaluate
this
krm
functions
and
then
it
this
k,
wrapper
server
will
invoke
the
actual
the
entry
point
of
the
original
containers
to
evaluate
the
current
functions.
D
D
D
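The per-request flow could be sketched roughly like this, with the gRPC plumbing omitted: the wrapper invokes the container's original entrypoint and pipes the ResourceList through stdin and stdout. This is a guess at the mechanics for illustration, not the actual POC code; here cat stands in for a function that echoes its input:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// invokeFunction runs the KRM function's original entrypoint, feeding the
// serialized ResourceList on stdin and capturing the transformed
// ResourceList from stdout, which is the standard KRM function contract.
func invokeFunction(entrypoint []string, resourceList []byte) ([]byte, error) {
	cmd := exec.Command(entrypoint[0], entrypoint[1:]...)
	cmd.Stdin = bytes.NewReader(resourceList)
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("function failed: %w", err)
	}
	return out.Bytes(), nil
}

func main() {
	// "cat" stands in for a KRM function that echoes its input unchanged.
	in := []byte("apiVersion: config.kubernetes.io/v1\nkind: ResourceList\n")
	out, err := invokeFunction([]string{"cat"}, in)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```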
D
Yeah, so in this one the pod contains an init container, and that init container just copies the wrapper server binary over into the actual KRM function container.
D
Yes,
so
this
I
remember
in
our
slag
chat.
D
I
think
nick
nick
is
experimenting
on
also
experimenting
server
side,
the
current
function,
but
their
approach
is
extracting
the
the
binary
from
the
oci
image.
If
I
remember
correctly,
but
that
approach
have
some
some
issues,
we
have
also
tried
that
so
if
some
functions
have
extra
dependencies
in
the
image,
this
approach,
that
approach
doesn't
work
well,
but
that
approach
works
when
there's
only
one
single
binary
to
execute
there's
no
extra
dependency.
A
Or are you not considering it at this time?

D
Yeah, that works too. For exec mode, the wrapper server binary can be used directly on the host, alongside the KRM exec function.
D
This should also work, as long as the binary supports the same architecture and OS.
A
Right,
so
you
basically
need
the
catalog
to
make
that
work
correctly,
but
it
in
theory
it's
feasible,
the
catalog
it
lets
you
look
up
the
correct
architecture
and
download
the
binary
accordingly,
not
that
it's
implemented,
but
that's
part
of
the
design.
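As a sketch of that lookup, assuming a hypothetical catalog shape keyed by OS and architecture (the real catalog design may well differ, and the URLs are invented):

```go
package main

import (
	"fmt"
	"runtime"
)

// lookupBinary picks the binary entry matching the host platform from a
// per-function catalog, returning an error when nothing was published
// for that OS/architecture combination.
func lookupBinary(catalog map[string]string, goos, goarch string) (string, error) {
	key := goos + "/" + goarch
	url, ok := catalog[key]
	if !ok {
		return "", fmt.Errorf("no binary published for %s", key)
	}
	return url, nil
}

func main() {
	catalog := map[string]string{
		"linux/amd64":  "https://example.com/fn-linux-amd64",
		"darwin/arm64": "https://example.com/fn-darwin-arm64",
	}
	// Resolve for the current host; prints an error for unlisted platforms.
	url, err := lookupBinary(catalog, runtime.GOOS, runtime.GOARCH)
	fmt.Println(url, err)
}
```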
D
Yeah,
so
that's
where
the
using
containers
have
the
have
the
advantage,
because
in
the
cam
the
cam
function
container
and
the
wrapper
server
containers
they
can.
We
can
build
multi-architecture
for
it.
So
the
docker
buildex
can
support
that
and
when
you
fetch
around
the
the
container
runtime
try
to
fetch
the
the
container,
it
will
fetch
the
matching
one
and
then,
when
we
copy
over
the
binary
from
any
container
to
the
other
containers.
This
should
always
work
because
they
always
have
the
matching
architecture
on
os.
D
The first time we bring up the container, it can take several minutes rather than several seconds; if the image is already fetched, it probably takes three to five seconds.
D
For subsequent requests, we can keep these containers alive, and in that case the execution time is usually below one second. Some KRM functions run fast and only take about 150 milliseconds, which is pretty fast.
D
Currently, what we are doing is one container runs one single function.
A
Okay,
so
if
you
have
a
pipeline
of
say,
five
different
functions,
each
of
which
is
source
for
different
container
you're,
going
to
be
spinning
up
five
different
kubernetes
pods-
and
you
just
leave
them
running.
Is
this
sort
of
like
a
multi-tenant
scenario
or
to
benefit
from
the
performance
gains?
Is
it
doesn't
have
to
be
like
a
single
cap
file
or
customization
that
repeatedly
evokes
the
same
function?.
D
We
imagine
it's
like
multi-tenant
solutions
like
there
are
multiple
different
team
in
the
same
company
or
organizations,
and
they
share
the
one
cluster
and
then
they
what
what
functions
they
are
invoking
like
it's
likely
to
be
curated
by
their
organization,
and
then
they
have
some
overlapping,
and
so
one
different
team
invoked
that
they
are
likely
to
have
a
to
run
functions.
That's
already
in
the
cache
it's
already
there,
so
it
will
be
faster.
A
I don't know if this is even a crazy idea or workable, but that introduces a remote dependency, and someone has to run that server, right? So there are a lot more moving parts in terms of the barrier to entry to use KRM functions, versus just having Docker installed on your laptop, or even just having access to the binary. Would a similar thing work locally?
A
I
guess
it
would
be
a
much
lower
benefit,
but
if
you
had
a
local
docker
container
being
reused
instead
of
a
one
that's
running
in
a
remote,
a
remote
pod
is
that
would
that
be
feasible
in
any
way.
D
Yeah,
that's
feasible,
but
why?
The
reason
why
we
are
currently
leaning
towards
running
it
in
kubernetes
cluster
is
that
running.
It
locally
means
the
user
need
to
have
a
container
runtime
locally,
either
something
like
a
docker
or
podman.
A
I'm wondering: when you write the KEP, will it be a PR against the kustomize repo that mainly focuses on the spec, or will it be a full KEP? I guess there's also a mini-KEP process inside kustomize. Were you thinking more of that side of things, or more of a full KEP on the enhancements side? As far as I can see, there are consequences for the catalog and for some of the existing proposals we have open related to kustomize. For example, we currently describe container functions, exec functions, and Starlark functions, which are being deprecated.
A
Is
this
a
is
this
a
fourth
one
and
how
how
do
we
present
that,
in
the
context
of
catalog
and
in
the
context
of
customized
and
captured
like
what,
what
are
the
user
interfaces
there
and
how?
How
do
they
differ
from
what
we
have
today.
D
I
think
what
I
want
to
propose,
at
least
initially,
is
the
change
to
the
spec,
to
define
the
interface
for
for,
for
krm
functions
to
support
server
mode
yeah.
It
will
have
impact
on
how
the
catalog
and
other
stuff
works.
D
Yeah,
so
that
part
I
haven't
really
fleshed
out.
I
yeah
it
will
impact
that,
for
example
in
the
catalog
it
should.
D
Let
the
user
discover
if
that's
support
the
way
2
or
we
want
function,
spec
or
and
also
one
a
user,
have
a
a
container
current
functions
container
image,
and
there
should
a
way
to
discover
if
this
one's
support
we
want.
The
way
to
something
like
that
should
be
should
be,
should
be
discussed
in
the
proposal.
A
So
you're
thinking
by
the
next
meeting,
that's
something
you
would
have
available
for
discussion.
The
next
meeting.
D
Yeah
yeah,
I
assume
it's
two
weeks
away.
A
Yeah, this is very interesting. Thank you for sharing, and for sharing these links, so we can take a look at the preliminary material ahead of the proposal. On my side, the main concern I'll be very interested to see worked out in the details is this: we want the performance benefits.
A
We want to make this a platform-friendly feature, but at the same time, keeping the barrier to entry low and the interchangeability of the functions high are priorities for me, because I think a big part of the power of the function specification is that it's really simple to implement. There's a lot of potential in the ecosystem we can build off of this, around the fact that all you have to do is accept this input format and emit the same output format.
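That contract can be illustrated with a minimal sketch: a ResourceList comes in, the function transforms the items, and the same shape goes out. The struct here is trimmed for illustration (the real spec also carries functionConfig and results), and the annotation key is invented:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceList is a trimmed sketch of the wire format a KRM function
// reads from stdin and writes back to stdout.
type ResourceList struct {
	APIVersion string                   `json:"apiVersion"`
	Kind       string                   `json:"kind"`
	Items      []map[string]interface{} `json:"items"`
}

// process is a stand-in function body: it annotates every item, the kind
// of in-place transformation a KRM function typically performs.
func process(rl *ResourceList) {
	for _, item := range rl.Items {
		meta, ok := item["metadata"].(map[string]interface{})
		if !ok {
			meta = map[string]interface{}{}
			item["metadata"] = meta
		}
		annos, ok := meta["annotations"].(map[string]interface{})
		if !ok {
			annos = map[string]interface{}{}
			meta["annotations"] = annos
		}
		annos["example.com/touched"] = "true"
	}
}

func main() {
	rl := ResourceList{
		APIVersion: "config.kubernetes.io/v1",
		Kind:       "ResourceList",
		Items:      []map[string]interface{}{{"kind": "ConfigMap"}},
	}
	process(&rl)
	b, _ := json.Marshal(rl)
	fmt.Println(string(b))
}
```

Anything that implements this read-transform-write loop, in any language, can participate in the pipeline.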
A
All
right,
so
that
was
the
last
item
on
our
agenda.
Please
add
your
name
to
the
attendees.
If
you
haven't
already,
is
there
anyone
who
wants
to
present
a
stand-up
today?
I
have
nothing
personally.
A
All
right
there,
any
other
topics,
anything
new
with
the
the
registry
side.
A
All
right,
unless
there's
anything
else,
feel
free
to
speak
up,
but
with
that
I
think,
that's
all
the
topics
we
had
today.
So
thank
you
all
for
attending
and
hope
to
see
you
next
time.