From YouTube: Kubernetes SIG CLI 20220209 - KRM Functions
A
Welcome everyone to the SIG CLI KRM Functions subproject meeting on February 9th, 2022. We have a pretty light agenda today, so I guess we'll just get started. Everyone, please add your name and company to the agenda while you have the chance.
A
So the first topic I have is just a quick one that I put in here. Basically, we have this schema that we proposed in the KEP, and I was working on the KRM function framework this past week, and something that I ran into was: it would be really nice if we could use the kubebuilder pattern to generate the OpenAPI for a given KRM function type when you're writing your function as a function author, and then pull that back in for use as validation, and have the framework help you with that.
A
Now, kubebuilder obviously doesn't know anything about our types, which wouldn't make sense, but it knows how to generate CRDs, which look almost exactly the same, but not quite. So I wanted to float the idea of making very small changes to this part of the spec.
A
I have some code, though not on the computer that I'm on at the moment, that shows an example of this: you can have a struct that is actually for a KRM function, but it's sparse, and it only has the group, kind, version, and schema parts. You can have that load a CRD, and it doesn't know that it was a CRD, because it looks exactly the same and has all the correct fields for the validation portion of a KRM function.
A
And that means you can literally use kubebuilder to generate the OpenAPI for your type and consume it back in a validation function of the framework as though it were actually the proper KRM function description.
A
Now, ultimately I think it would be awesome if we did build a bespoke thing that generated the full, correct KRM function type off of annotations, the way kubebuilder does. But I feel like that is really far out for us, and these are really simple changes. Basically, the group field, I think, is in the wrong spot: it's under the names field instead of at the top level. And then there's a top-level field, called validation I think, that can provide validations for all versions, which we don't support and CRD has. With those really, really small changes, all this tooling would just be compatible, and we would be able to use it up front. So I just wanted to make that proposal, and I'll make a PR to the KEP, but any thoughts about that?
B
I like these ideas: having tools like kubebuilder and controller-tools include builders that generate this, so it streamlines the workflow for users.
B
But for the part where you said we need to make some changes: what changes do we need to make? Are you saying we need to make it more like a CRD, so we can reuse controller-tools to generate it?
A
Yeah, essentially this part, the part that describes an individual KRM function, is super close to a CRD, and my proposal is that whenever our schema has a field that CRD also has, we put it in the same place in the structure, so that the tooling will be compatible. There are fields that CRD has that make no sense on the client side, and I'm not saying we should add those in; just leave them out. I'm saying that where there is overlap, let's put the fields in the same place.
A
In practice, I think that would mean moving the group field, and having a top-level validation that allows you to provide an OpenAPI v3 schema for all your versions. Those are the two main ones that I've noticed so far. It also doesn't mean we can't have additional fields; that's totally cool, and I'm not proposing changing that at all. Those can be wherever they want. Just where the fields exist in both types, let's make them exist in the same place in the structure.
B
But for the validation schema in CRD, I think the top-level validation schema is deprecated, so the suggested way is to use the per-version validation schema.
A
I
know
it's
older
on
the
type
there's
no
deprecation,
maybe
it's
somewhere
else.
I
was
just
looking
at
the
type
directly,
so
maybe
I
missed
it.
If
it's
deprecated,
then
yeah,
we
don't
need
to
support
it
for
sure.
A
In general, I'll just make a PR that shows the exact changes that would flow out of that. I just realized this morning, actually, as I was working on this, that it would be a potential big leverage point if we just aligned in this small way.
B
I generally like these ideas. Our initial design was inspired by the CRD structure, so I think it makes sense if we can make it even more like CRD.
A
So, okay: the SDK, the registry, and I guess we'll do catalog last, since there's nothing to say about composition.
A
Aren't
you
do
you
want
to
lead
with
the
function?
Sdk
stand
up,
you've
been
working
on
that
a
lot.
B
Sure. So for the function SDK: as you suggested, I've now split the krm/demo package change into separate PRs, so it's ready for review.
A
B
Yeah. Oh, sure. Another thing I want to bring up: I have mentioned this to Katrina, and I also asked this question in the PR. The problem is that we rely on the go-yaml package to do the lower-level YAML parsing and serialization.
B
So,
but
there
is
a
problem
with
that
when
passing
with
the
kubernetes
resources
since
kubernetes
resources,
doesn't
the
struct
doesn't
natively
provide
the
yaml
tag
which
the
go
dash
yamo
package
rely
on.
So
that
means
when
an
object,
embed
a
kubernetes,
let's
see
metadata
field,
so
our
library
won't
be
able
to
parse
it.
B
It's
really
frustrating
for
the
users.
I
think
we
have
seen
users
complain
about
this,
and
I
have
a
idea
to
solving
this,
which
is
right.
Now
it's
got
the
yamu
tag.
First,
I
think
we
can
fall
back
on
the
json
tag
if
the
yamo
tag
is
empty,
so
this
can
make
it
work
when
there's
only
json
time.
B
So
I
imagine
these
two
katrina,
so
it
seems
we.
We
even
though
we
have
an
internal
fork,
but
we
don't
intend
to
make
it
a
permanent
fork.
So
the
idea
probably
is
to
like
create
a
pr
in
the
in
the
upstream
ripple,
and
then
your
own
fork
use
the
script
to
copy
that.
B
I
think
that's
the
current
current
plan.
I
guess.
A
We
are
only
doing
that
for
emergency
fixes.
Generally
speaking,
we
don't
want
to
touch
the
yaml
library
because,
as
soon
as
say,
we
introduced
that
change
right.
The
changes
that
we've
made
so
far
using
that
mechanism
that
I
mentioned
were
to
fix
like
critical
regressions
from
the
end
user
standpoint
from
end
user
customize,
specifically
so
the
effect
of
making
those
patches
was
that
we
were
able
to
do
a
customized
release
that
included
some
go
yaml
fixes
from
further
ahead
of
upstream
than
we
had
been
before.
A
So, from an end-user point of view, there are no new features in the internal fork, and we kind of want to keep it that way, so that we're not stuck with it forever. Whereas the thing that you're proposing here is definitely a feature that end users could start relying on: you can suddenly use json tags. That is the major feature of the sigs.k8s.io/yaml library, right? The one most of Kubernetes is using.
A
This
example
is
based
on
the
v2
of
the
go
yaml
and
that's
the
enhancement
that
it
makes
to
it.
If
I
remember
correctly,
like
the
main
one
was
the
processing
json
text,
so
I
don't
want
to
just
like
silently
do
that
on
our
fork
and
get
stuck
with
it
forever.
A
So
I'm
wondering
I
actually
ran
into
this
myself
this
this
week,
because
I,
as
I
was
mentioning,
I
was
experimenting
using
cue
builder,
cue
builder-
does
not
allow
you
to
generate
anything
unless
you
have
embedded
meta
v1.
So
I
ran
into
this
exact
problem
where
a
bunch
of
the
points
in
the
framework
tooling
were
not
serializing
or
deserializing.
My
types
correctly
anymore,
once
I
made
that
change,
so
I
totally
empathize
with
this.
A
In
my
case,
I
was
able
to
fix
just
a
couple
specific
places
because
it
was
always
some
under
the
hood
framework
thing
that
was
doing
the
wrong
thing
that
did
not
affect
like
that.
A
Didn't
actually
need
our
nodes
at
that
point,
so
I
was
able
to
just
use
the
zigzammel
to
do
the
sort
of
round
tripping
that
I
was
doing
at
that
moment
in
time
to
just
transparently
fix
the
problem
without
exposing
without
needing
to
to
change,
go
yaml
and
without
needing
to
like
change
what
gets
exposed
to
the
end
user
in
terms
of
it
always
being
our
nodes.
A
So
I'm
wondering,
if
is
there
an
equivalent
change?
I
I
think
you
you
have
it
actually
in
your
pr
the
equivalent
change,
where,
like
there's
a
very
specific
place,
where
we
know
where
we're
handling
the
end
user
struct
as
a
struct.
So
when
we're
doing
that,
we
need
to
support
the
possibility
that
they're
using
json
tags.
So
that
means
we
need
to
load
or
we
need
to
unmarshal
into
it
with
the
other
library,
which
is
like
a
little
weird.
But
it's
it's
on
us.
It's
like
the
complexity
is
on
us
to
handle.
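The trick sigs.k8s.io/yaml is built on, decoding into json-tagged structs by routing the document through encoding/json, can be sketched with the standard library alone. The `roundTripInto` helper and `Config` type below are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// roundTripInto decodes a generic document (the kind of map a YAML
// parser yields) into a json-tagged struct by marshaling it to JSON
// and unmarshaling it back, so json tags work without any yaml tags.
func roundTripInto(doc map[string]interface{}, out interface{}) error {
	b, err := json.Marshal(doc)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, out)
}

// Config is a hypothetical function-config type whose fields carry
// only json tags, like Kubernetes API types.
type Config struct {
	Name   string            `json:"name"`
	Labels map[string]string `json:"labels,omitempty"`
}

func main() {
	doc := map[string]interface{}{
		"name":   "set-labels",
		"labels": map[string]interface{}{"app": "demo"},
	}
	var c Config
	if err := roundTripInto(doc, &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Name, c.Labels["app"]) // prints: set-labels demo
}
```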
A
So
I'm
not
like
super
worried
about
just
doing
that.
Like
the
end
user,
never
knows
that
we
use
that
library
or
the
other
one
right.
B
A
That library is also used by some Go projects outside Kubernetes, which is a super confusing situation. But I don't think the solution at this point, given the staffing that our project has, can be for kustomize/kyaml to take on maintainership of a permanent hard fork. It probably wouldn't be SIG CLI at all that would maintain a hard fork of go-yaml; it would probably be API Machinery. So I don't want to just do that through a back door and end up stuck with it.
B
So
but
another
concern
about
this
goyamo
package
is
that
so
it
seems
it's
not
very
actively
maintained,
so
last
commit
investory
branch
was
more
than
one
years
ago,
and
there
are
so
many
open.
The
pr
was
not.
We
were
standing
there,
but.
A
Yeah,
it
essentially
has
a
soul
maintainer,
and
it's
somebody
who
is
very
very
busy.
Natasha
has
some
experience
trying
to
get
something
merged
there,
where
they
actually
agreed
to
the
feature
in
general,
but
we
repeatedly
painted-
and
it's
like
the
critical
fix.
That's
in
our
fork
like
we
don't
want
the
fork
at
all
thought.
We
had
alignment
to
just
get
that
merged
upstream
and
it
never
happened.
So
it's
super
painful
for
us
and
for
customize
in
particular.
A
We
can't
a
as
I'm
sure,
you've
noticed,
like
all
of
k,
handles
built
around
the
ability
to
use
our
node,
which
is
what
yaml
v3
is
offering
over.
Well
any
of
the
other
options.
That's
why
kml
doesn't
lose
comments.
That's
why
it
lets
you
modify
the
details
of
the
style
of
the
various
yml
nodes,
we're
kind
of
stuck
with
go
yaml
v3
because
of
our
reliance
on
that
stuff,
which
is
giving.
C
I
share
your
concern,
though
absolutely
I
did
also
talk
to
jordan
about
potentially
creating
a
permanent
fork
with
api
machinery.
I
think
that's
definitely
a
possibility
that
someone
would
need
to
drive
that
so
so
jordan
was
receptive
to
the
idea
he
didn't
say
no.
He
said
we
would
probably
want
to
fold
it
into
the
zigzama
library.
A
C
A
A
D
Sega
they
are
wednesdays
at
11,
pacific
every
other
week.
Let's
see
when
the
last
one.
A
Okay,
so
maybe
we
should
all
go
to
that
one
to
discuss.
A
I
guess
I
I
could
mention
that,
like
the
stuff
that
I
was
talking
about
earlier
at
the
beginning
of
meeting,
I've
been
making
a
pr
showing
how
we
can
support
that
functionality,
basically
consuming
making
it
easy
to
use
open
api
schemas
for
validation,
basically
has
a
second
hook
in
the
load
function.
Config
that
allows
you
to
provide
a
schema
and
then
uses
it
automatically.
If
you
do
provide
one,
I
think
it
would
be
pretty
powerful.
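One way such an opt-in hook could look, sketched minimally. The `OpenAPISchemer` interface, `loadConfig`, and `myConfig` are all assumed names for illustration, not the framework's actual API:

```go
package main

import "fmt"

// OpenAPISchemer is a hypothetical opt-in hook: a function-config
// type that can supply an OpenAPI schema for validation.
type OpenAPISchemer interface {
	OpenAPISchema() []byte
}

// loadConfig sketches the proposed flow: after decoding the config
// (elided here), ask it for a schema and, if one is provided,
// validate against it automatically.
func loadConfig(cfg interface{}) error {
	if s, ok := cfg.(OpenAPISchemer); ok {
		schema := s.OpenAPISchema()
		// A real implementation would run an OpenAPI validator here;
		// this sketch only reports that the hook fired.
		fmt.Printf("validating against a %d-byte schema\n", len(schema))
	}
	return nil
}

// myConfig opts in by implementing the hook.
type myConfig struct{}

func (myConfig) OpenAPISchema() []byte { return []byte(`{"type": "object"}`) }

func main() {
	if err := loadConfig(myConfig{}); err != nil {
		panic(err)
	}
}
```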
A
So
yeah
that'd
probably
be
later
today
so
function
registry.
I
saw
some
prs
go
through
for
that.
That's
pretty
exciting!.
B
Oh
yeah,
so
for
the
function
registry,
so
natasha
is
moving.
The
render
helm
chart
functions
into
the
registry
with
some
timestamp
and
the
testing
setup.
B
And
for
the
for
the
so
I'm
helping
natasha
with
the
the
the
pro
test
job
setup
seems
so
we
will
need
a
doctrine
docker
for
our
testing,
but
it
seems
our
the
docker
demon
was
not
running,
so
I'm
still
investing
it.
Are
you
investigating
that.
C
C
C
A
But
yeah,
I
actually
don't
want
to
answer
the
question.
I
actually
have
no
idea
how
the
priesthood
job
works.
I
haven't
dug
into
that
aspect
since
I
joined
the
project
so
unfortunately
I
don't
know.
B
It
doesn't
turn
what's
customized
with
somebody
has
a
running
back.
C
A
B
Yeah,
so
basically
how
to
get
the
docker
demon
500
yeah.
So
it's
right
now
blocking
test
from
running
okay.
C
A
I guess once that is done, we can transfer the issues, the GitHub issues, over to the functions repo. Yes, all those ones that you tagged. That would be...
D
So
I
have
been
pretty
preoccupied
with
other
day
job
stuff.
We
are.
We
started
two
new
projects,
so
I've
been
kind
of
focused
on
getting
those
boots
wrapped
up,
but
I
have
returned
last
week
to
start
working
on
this
in
more
earnest.
So
my
hope
is
to
try
to
have
a
rough
work
in
progress
pr
at
the
end
of
this
week
or
early
next
week.
That's
what
I've
been
working
on
for
the
last
few
days.
D
B
So if we change the KRM function schema, it will impact the catalog integration, since the catalog relies on that.
D
And then, internally, we are using KRM functions in another place now, and there are two folks pretty dedicated to working on that. I'm going to try to invite them to these meetings. Just as an aside, it's not part of standup, but we're going to try to get them to come to these fairly regularly, so that we can also leverage their talents to help push things forward.