From YouTube: Kubernetes KubeBuilder Meeting 20210114
Description
KubeBuilder Meeting for 2021/01/14. See https://sigs.k8s.io/kubebuilder for more details.
A
All right, hello and welcome to the first KubeBuilder, controller-runtime, and controller-tools meeting of 2021. Today is Thursday, January 14th. As a reminder, this is being recorded, so don't say anything you don't want recorded for all posterity and uploaded to YouTube.
B
Hi. Well, I have a couple of points on the agenda, but as it's pretty busy I will try to be pretty fast so that we can go through all the points. The first one: we are using sigs.k8s.io/yaml, the Kubernetes-flavored YAML parser, for the configuration files, and there are a couple of special things about this implementation of YAML.
B
So I was wondering if we really needed that library, or whether we could switch back to the one that we were using previously. There are advantages and drawbacks to both approaches, so I would like to know what the rest of the members think about it. Just to enumerate them:
B
If we use the one from Kubernetes, we are doing a three-step process to marshal and to unmarshal, in both directions, and we use JSON tags and MarshalJSON methods.
B
If we switched back to the other one, we would have to use MarshalYAML and YAML tags instead, but we would have only one step to marshal the configuration files. I've been leaning in both directions; I keep changing my mind. So I don't know, what do you think about this?
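For context on the trade-off B describes, here is a minimal sketch (hypothetical `Config` type, standard library only) of the JSON-tag-driven approach that sigs.k8s.io/yaml builds on: it converts YAML to JSON and back, so `json:` tags and MarshalJSON methods drive both formats, whereas a native YAML library would need separate `yaml:` tags and MarshalYAML methods.

```go
// Sketch of why sigs.k8s.io/yaml reuses `json:` struct tags: it converts
// YAML to JSON and delegates to encoding/json, so the same tags (and any
// MarshalJSON methods) control both formats. The Config type below is a
// hypothetical example, not KubeBuilder's actual config type.
package main

import (
	"encoding/json"
	"fmt"
)

type ObjectMeta struct {
	Name string `json:"name"`
}

type Config struct {
	Metadata ObjectMeta `json:"metadata"`
	// omitempty behaves exactly as it does for JSON, because the YAML
	// round-trip goes through encoding/json.
	Replicas *int `json:"replicas,omitempty"`
}

// MarshalConfig renders the config via the JSON tags; sigs.k8s.io/yaml's
// Marshal would emit the YAML equivalent of this same document.
func MarshalConfig(c Config) string {
	b, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(MarshalConfig(Config{Metadata: ObjectMeta{Name: "demo"}}))
}
```

A native YAML library such as gopkg.in/yaml.v2 would ignore these `json:` tags entirely, which is where the one-step-versus-three-step trade-off comes from.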
C
I'll chime in with my two cents. I don't know if we do this in KubeBuilder right now, but I've seen problems multiple times in the past from using a non-sigs.k8s.io/yaml library. If we're ever going to be marshalling or unmarshalling YAML that's destined for the API server, then it seems like we should stick with sigs.k8s.io/yaml, because I've seen unexpected marshalling and unmarshalling behavior happen if you don't. I don't know if we do that, or if we plan on doing that, but I have a hunch that we might, based on the fact that we scaffold a bunch of YAML that ends up going to the API server.
C
Yeah, I think you're right. If you look at KubeBuilder right now, I think we just have strings that get written directly to disk as YAML, but my concern would be: if we ever marshal or unmarshal those strings, would we shoot ourselves in the foot by just defaulting to the one that we're already using, and then have these issues where things marshal or unmarshal unexpectedly, based on what Kubernetes expects?
A
From my experience, the main case where you want to use the underlying YAML implementation, instead of the sigs.k8s.io/yaml wrapper, is if you need the structure-preserving part of it. In controller-tools, there's one point where we use the underlying YAML library in order to do transformations on the YAML while keeping comments and stuff like that, without actually unmarshalling. But that's the main use I've seen for it.
A
So
if
we
ever
decide
to
do
that,
it
might
be
worth
revisiting.
But
if
we're
not
doing
that,
I
don't
know
if.
B
So I wouldn't need to set that flag to solve that. I uploaded an enhancement proposal; I don't know if you have had time to review it, but I would appreciate some feedback on the new algorithm, on the new approach. I think it will solve those problems that I mentioned and will be a bit more efficient.
F
Right, and that's, I think, a fair thing to question. Originally we wanted that to be the case: you set a plugin on init, and then all other commands would use that plugin.
F
But say, for example, you had some plugin that could initialize and create an API for a project, but then you had another plugin that you wanted to use to create a webhook. You couldn't do that under the current scheme, and I think we want to allow that going forward, because we don't require every plugin to implement every single subcommand interface.
B
Okay, so the last point I had: I was revisiting all the tests and checks that we run in KubeBuilder. They are a bit tangled and mixed up.
B
The
first
thing
is
that
there
are
things
that
are
being
done
in
in
pro
and
other
things
that
are
being
done
in
in
github
actions.
For
example,
github
actions
allow
us
to.
B
Use
a
test
for
macbook
for
for
mac
and
it
also
allow
us
to
to
chain
test
so
that
they
are
not
running
all
until
certain
conditions
have
succeeded.
So
I
was
wondering
if
we
wanted
to
migrate
everything
to
github
actions
or
keep
brow
or
maybe
migrate.
Everything
too
pro,
I
don't
know
I
feel
like
proud,
doesn't
allow
to
do
everything
that
github
actions
does
and
that's
a
bit
of
a
limiting
factor.
So
I'm
a
bit
leaning
to
make
everything
in
github
actions
and
unforget
about
bro.
In
that
sense,.
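As a sketch of the two capabilities B mentions, a GitHub Actions workflow can gate jobs on one another with `needs:` and run on macOS via a `runs-on` matrix. The workflow, job, and make-target names below are illustrative, not KubeBuilder's actual CI config.

```yaml
# Hypothetical workflow sketch: macOS coverage plus job chaining.
name: test
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make lint
  unit-test:
    needs: lint            # chained: only runs after lint succeeds
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]   # covers the Mac case Prow lacks
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - run: make test
```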
A
If you structure your GitHub Actions workflows separately, there's a way to re-run an individual action.
A
I think part of it might be... from what I've seen, there are three reasons we're using Prow. The first is that it's what the rest of the Kubernetes project uses, by and large.
A
Actually, I think I only have two. The second one is that, for certain things, especially for unit tests, it gives us the ability to gate who can actually run the tests, because nominally, or technically, running the unit tests means running arbitrary code that people upload. Prow has the ability to check whether someone's in the org, or otherwise require an ok-to-test, whereas GitHub Actions...
A
...just runs whatever someone uploads, so technically someone could upload some sort of malicious code. I think that's why we currently have a split between GitHub Actions and Prow. GitHub Actions, as you mentioned, does allow us to run our OS X tests, so technically we're already running that technically-untrusted code there, and it has maybe a nicer interface in some places.
D
Yeah, there are two more advantages to Prow. One is re-testing: with GitHub Actions you can have a workflow where at some point someone creates a pull request and it gets green tests; then something that conflicts with this pull request gets merged to the base branch, say master. Nothing will rerun the GitHub Actions, and the pull request that was created sooner but merged later will merge, and you end up with a broken branch.
D
That's one thing. Another one, though I don't know how relevant it is: with Prow you can integrate with Testgrid, and that in turn is very useful for getting trends for flaky tests, debugging them, and having some longer-term view of which tests flake more often, and stuff like that.
A
Oh yeah, the first one you mentioned is super important; that one slipped my mind. I think GitHub made progress on fixing that recently, but I don't remember where I saw it, so take that with a grain of salt. That first one especially is very important, and Testgrid is also nice, but making sure we run the tests against the merge commit that's actually going to be merged is important.
C
I don't really have a huge opinion one way or the other, but another plus for GitHub Actions is that the code that defines the tests is stored in the same repository as the code itself. Dealing with Prow sometimes gets a little bit annoying, because you have to make a couple of commits and merges across multiple repositories before you can unblock yourself from something that's failing.
A
We should maybe add a note to our dev documentation, just in case people have this thought in the future, because it will probably come up again, as to why things are the way they are.
H
All right, I guess I'm up. So last Friday I submitted a PR to add some testing clients. In the past I've brought up the idea of this reactive client, and you had said, to oversimplify, "reactors bad." So I added these error-injector and spy clients, which are more special-purpose, single-purpose testing clients for injecting errors or spying on API calls. And then Alvaro's feedback was: "I like this, but I like the reactive client better."
H
He really likes the reactive client better. So I thought it would be good to open a dialogue on what we want to do. Do we want to include the reactive client? Do we want to include the other two? Both? Neither? Yeah, that's it.
D
David mentioned that you, as in Solly, are opposed to the reactive client, because there was some discussion about it very long ago. So my question for Solly would be: are you opposed to the general idea of having a client that allows injecting arbitrary behavior, or just to the way client-go implements it?
A
Yeah, okay. So I think my number one complaint is that the actual way client-go implements this is terrible. Nothing's typed; everything's type assertions; the resource name is just an arbitrary string that sometimes changes out from underneath you, depending on how the underlying API machinery is feeling that day.
A
So if we're going to have a reactor, if we're going to have something like a reactive client, we should have a controller-runtime-style interface for it, as opposed to a client-go-style interface, and we should make sure that the weird corner cases are not weird. Things like: what happens if you return "false, this is unhandled" and it falls off the end of the handler chain? There's really weird behavior in client-go that doesn't make a ton of sense.
A
There are parameters that you'd think do one thing and actually do something else. So we just need to make sure that, if we're going to have a reactive client, there is a solid, easy path, and maybe easy helpers for the simple cases, like inject-error or whatever, separately. I originally was very much...
A
I would like to push people towards using envtest instead. But I think we've gotten enough feedback from everybody at this point that the reactive client is useful in certain situations, so I'm no longer hard opposed to a reactive client. But if we're going to have one...
H
Okay, so the implementation I have actually just uses the client-go implementation of reactors; it's kind of an adapter to the API. It does a round trip: converting things to the test actions and then converting those back into controller-runtime-style calls.
H
I don't know; I hesitate to re-implement reactors in our own way, but it sounds like maybe that's what you would prefer.
H
Maybe it wouldn't be so bad. The API for the error injector would actually be, I think, pretty similar to what I would want to create for a reactive client that's not inspired by client-go, or not using client-go's reactors.
A
Yeah, I can try to take a look. I haven't gotten a chance to look at the design yet; I'm just skimming over it now. The direction of how inject-error works and stuff is definitely, I think... it's maybe fine if it ends up wrapping the client-go one, but (a) we need something like inject-error, and (b)...
A
We need an interface that's more like inject-error, and (b) we just need to make sure that the behavior is well documented in terms of what happens when you do certain things.
A
Yeah, and stuff like that. There are cases, for instance in the reactive client from client-go, where it will helpfully attempt to apply a label selector for you, but that's not really documented anywhere.
H
Yeah, okay. So it sounds like we're steering towards an all-of-the-above resolution to this. Does anybody disagree with that?
I
One of the things that we've talked about before, I think, is that using envtest you end up using the actual kube-apiserver. So you're not writing your own validations in the reactor, where you've now re-implemented, say, the Pod API server validation to test that your controller handles all the different cases. Is there something we could do to use the kube-apiserver validations?
D
Yeah, I also think that if you're writing tests, sometimes you're probably going to use an object that wouldn't be accepted by the API server, because all the fields the API server requires might be completely irrelevant for whatever you're currently testing. So I'm not even sure it makes sense in this context.
A
Yeah, I kind of think that if you are using the reactive client, you are hopefully in a situation where there is a real reason that you just cannot use envtest, and that if you could, you would use envtest. I also kind of think that's how we should document the reactive client: prefer envtest.
A
You can even inject errors there; you can wrap the envtest...
A
You can wrap the normal client to inject errors for envtest if you want. But if you end up in a position where, for some sort of performance reason, or because there's a really interesting sequence of events that you need to trigger that's hard to trigger in a realistic environment, then here's this and you can use it; but we don't generally recommend using it, for these reasons.
A
Yeah, I mean, we can maybe discuss this at the end of the meeting a little bit. This has kind of been my advice basically as long as we've had envtest, but yeah, I'm happy to discuss that at the end of the meeting and hear your thoughts; that could get into a long discussion.
A
All right, thanks. I think next up is me, but I'm going to move myself to the bottom, because I have a demo, so we can do that last, and we'll get through the other points, which are probably a little bit quicker. Joe or Jenny, are you ready?
E
Yeah, ours is real quick. A couple of weeks ago we mentioned that we've been working on generating an apply client, and that we'd love to get real feedback from the community on that.
E
Probably in less than a week we're going to have something that people can do a hands-on experiment with: basically, the ability to build a KubeBuilder that will also generate apply-type bindings, in at least one or two forms, either the structured types or the builder functions, or both, as we mentioned before. The only question we had was: how should we communicate that to this group, so that people who are interested in trying it out can see it?
E
Okay, yeah, we'll do a little blast once we have something in place. We'll probably have a README everybody can follow. For everybody on this call: if you use apply, we'd really love you to try it out. Basically, we're going to have two or three weeks where it'll be available and you can experiment with it.
E
If
you
have
like
an
existing
crd,
that
you
use
coupe
builder
with
it'd,
be
great,
if
you
just
try
running
it,
see
if
it
does
something
reasonable
for
you.
We're
gonna
have
like
a
questionnaire
at
the
end.
That
asks
like
certain
preferences
that
you
have
about
this,
and
so
it
could
really
impact
some
design
decisions.
It'll
also
be
used
to
inform
sig
api
machinery
on
on
which,
which
style
of
generated
bindings
we
like
best
for
apply
so
yeah
I'll,
send
out
a
blast
in
hopefully
also
the
week.
Thanks.
F
Yeah, I think Vince's concern with Quay was that Quay is down a reasonable amount, whereas GCR and Docker Hub are not, so he preferred not to use Quay for that reason. Otherwise, yeah, I think it's reasonable to build and push our own, I guess, or just pull, tag, and push to GCR.
A
Yeah, the previous iterations of the image have, from what I understand, just been pull-and-tag. I'm sorry, I've been super busy recently and haven't had time to work on this, but I think we should have a Cloud Build config that does the pull and tag, instead of requiring an individual to do it, because that's not great. From a semi-selfish perspective...
A
You
know
I
don't
yeah
if
I'm
on
vacation,
I
don't
want
to
have
to
go
like
push
an
image
and
from
like
a
more
and
then
like,
reflecting
that
to
a
more
project
stability
perspective
like
we
shouldn't
have
it,
you
know
it.
It's
not
great
that,
like
we
have
the
case
of
like
oh,
we
need
to
go
get
you
know
a
googler
to
push
this
image
or
whatever,
like
that's,
not
a
good
situation.
I
think
so
I
I
I
think
I
mentioned
this
before
but
like.
A
I
would
really
like
to
see
us
just
have
like
a
short
cloud
bill
config
for
that,
and
then
we
can
just
do
it
like
we
do
for
the
for
the
the
like
bundles,
that
we
publish
and
like
the
snapshots
and
stuff
like
that,
where
we
just
have
cloudville
config
sitting
on
its
own
branch
and
then
every
time
we
wanna,
we
need
to
push
a
new
version.
We
just
do
a
commit
to
that
branch
and
we
have
cloudbelt
looking
at
that
branch
and
it
just
builds
a
new
copy.
A
Okay, cool. Yeah, I imagine it would probably be a pretty short Cloud Build config, because it should literally just pull the existing image, tag it as the new one, and then tell Cloud Build that it needs to be pushed. Just let me know when you've done that, and I'll make sure that Cloud Build is enabled.
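The pull-tag-push flow A describes might look roughly like the following `cloudbuild.yaml` sketch; the image names and tags are placeholders, not the project's real registries.

```yaml
# Hypothetical cloudbuild.yaml sketch: pull an upstream image, retag it,
# and list it under `images:` so Cloud Build pushes the new tag.
steps:
  - name: gcr.io/cloud-builders/docker
    args: [pull, upstream.example.com/tools/image:v1.2.3]
  - name: gcr.io/cloud-builders/docker
    args:
      - tag
      - upstream.example.com/tools/image:v1.2.3
      - gcr.io/example-project/image:v1.2.3
images:
  - gcr.io/example-project/image:v1.2.3
```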
B
Thanks. Just a comment about that: should we move that part to the other repo, the Kubernetes KubeBuilder release repo?
A
Yeah, we could. Honestly, I'm kind of ambivalent on that; it seems like an okay proposal, I guess.
F
Next one. Yep, just a quick announcement: I think we are ready to cut v0.8.0 of controller-runtime. This goes along with what we talked about before the holiday, where we're going to try to cut controller-runtime versions...
F
Whenever
a
new
cube
version
is
released,
just
to
keep
more
closely
aligned
with
upstream,
the
070
release
was
really
big
and
I
think
it
it
definitely
included
one
coupe
bump,
but
it's
it
was
in.
I
guess
a
little
behind
upstream,
so
going
forward.
We're
gonna,
try
and
release
more
frequently
to
capture
new
cube
versions.
F
So
yeah
this
this
next
version
0.80
is
not
going
to
be
super
feature
full,
except
for
the
coupe
120
buck,
and
that
should
be
happening
today.
A
All right, and I think we have one more thing before me: "server-side apply: any plans to include a wrapper in controller-runtime?" Yes, there are plans; talk to Joe and Jenny, who just appeared briefly.
A
They
are
working
on
server
side,
apply
related
things,
so
we'll
we'll
be
probably
adopting
a
similar,
basically
a
similar,
a
similar
thing.
You
know
using
the
same
kind
of
generated,
structs
and
stuff
generated
from
controller
tools,
but
you
know
just
working
with
our
client
instead
of
their
client.
K
I guess so. Thank you. I'm sorry, I saw this question a bit later than other folks did. There are some plans to use server-side apply to do some better testing in OpenShift.
K
So
it
can
tell
around
time
in
a
short
amount
of
time.
I
will
definitely
talk
with
jamie
and
jeep
bets
about
this,
but
yeah.
If
anything
else,
I
would
be
eager
to
try
it
out
help
with
anything.
A
Cool. All right, I don't know if Joe is still here, but Joe, if you are: there's an extra volunteer. All right, okay. So the last thing is for me. I have been working on something kind of on-and-off for a while, and I've recently gotten it to a point where it is demo-able.
A
People won't have to learn Go and install the Go tooling if they're, say, writing something in Python or whatever. And then maybe it makes things easier for upstream as well, to generate well-typed and validated things that we can then embed in our CRDs, and maybe, hopefully, address some of the issues that we've had with the existing Go tooling.
A
So hopefully... I'm going to give a quick demo here, and then, hopefully, eventually, I'd love to get everyone's feedback on it. There should be a KEP happening soon for upstream as well, and I have some not-quite-finished internals of controller-tools that are refactored to rebase on this as an intermediate step, so hopefully this will be the new internals of controller-tools as well.
A
All
right,
let
me
share
my
screen,
an
application
window.
I
think
it's
this
one.
A
Awesome. All right, so basically, let's take a look. I've actually converted all of core v1, but it's a little bit intense, so I figured I'd show you a little snapshot, a subset that shows the important parts. Everything is grouped under a group-version. You can have multiple group-versions per file if you want to instead, say, organize your code by kind, so that you have all of one kind...
A
...next to each other. Or maybe you have a small API and splitting it up between files is unwieldy. Then, documentation has a special syntax. This prevents weird spacing mistakes and accidentally including comments and other metadata in the documentation, which happens sometimes in Kubernetes. And the naming is oriented towards what it ends up looking like in the JSON; for generating Go, this gets converted into Go forms.
A
As you can see, we have first-class concepts of "optional," and of the different kinds of lists and mappings that we have in Kubernetes, so lists versus list-maps. Same thing for defaults: we have, oops, first-class concepts of defaults; my syntax highlighting broke very briefly for some reason. Yeah, you can see we have all the usual suspects, like wrapper types and structs and volumes, and first-class concepts of enums.
A
Among other things, this should allow us to publish API documentation with actual descriptions for each of the enum variants, and also to more easily have validation for enum fields, instead of forcing people to write down the Go constants and then write down `+enum` separately. And then, finally, we also have unions, which I know has been a feature that a lot of people have been asking for. So this is the kind of Kubernetes-standard one-ofs.
A
There's more, but I got a little lazy and didn't want to type out the whole thing. And then, finally, we have markers, a more structured form of markers, as an extension point. There are some built-in ones for common metadata, and then you'll be able to define your own and use them like this. Markers are strongly typed, and you have to import the definitions of the things that you want to use, so no more...
A
..."oh, I accidentally misspelled this marker and controller-tools just ignored it, because it didn't know whether I actually meant a different one." Because you have to import them and specifically type them, the new parser will be able to say, "I don't know what this is; you didn't import the definition, so I'm going to error out," which is hopefully a better user experience.
A
I'm assuming I moved a little fast there, just because we're running a little short on time. I'll take that as no comments or questions for now. So, the architecture of the parser, which will eventually be bundled into a single helpful command, was kind of modeled...
A
...after the way proto does things. Oops, that is the wrong directory, hold on; I cd'd up too many directories; there you go. All right, so we have an intermediate form that is serialized to disk and can be consumed immediately or saved around. This is intended for consumption by tooling, like various parts of controller-tools. Theoretically, you could also...
A
And
then-
and
this
is
just
gonna
spit
out-
the
like
text-
form
of
the
the
intermediate
that
it's
represented-
the
intermediate
data
that
it's
generating
well
all
right,
so
we've
generated
some
intermediate
parks
form.
This
is
actually
been
serialized
to
all
around.ckdl,
which
we
can
see
is
a
lovely
blob
of
compiled
data.
It's
actually
it's
in
proto
form,
so
like
any
tooling,
can
import
the
proto
definitions
and
make
use
of
it,
and
then
we
can
feed
that
into.
A
This is just a hacked-up form of the same machinery that we've used from controller-tools, and I have evidently screwed something up.
A
I have angered the demo gods. Let's see, we can quickly check if I can figure out what I screwed up, but otherwise... it's not a demo unless you screw something up. It's definitely not a demo unless I screw something up. What did I screw up?
A
So you can see we've got an actual CRD that we've generated here, as one might have expected, with apiVersion, kind, metadata, the spec that we described, and all of our documentation. Even for that union we had, we generated the very weird, complicated blob of OpenAPI that's required to get a tagged union to work in Kubernetes structural schemas without it complaining. Yeah.
A
So
that
is.
That
is
what
I
have
so
far.
I
also
have
not
quite
ported
from
my
initial
prototype,
the
equivalent
of
like
a
go
format
for
this
and
a
tool
and
also
a
tool
to
help
you
migrate
over
from
go
definitions.
You
give
it
go
definitions,
it
spits
out
the
intermediate
representation
and
then
there's
a
separate
tool
that
I
haven't
quite
finished.
Importing
from
my
initial
rust
prototype
that
spits
back
out
from
intermediate
representation,
the
equivalent
textual
idl
form
just
does
anyone
have
any
questions?
Comments,
concerns.
C
Got one. So what's your vision for this long-term? Instead of scaffolding Go files for the types, would we scaffold this instead, and then generate Go and CRDs?
A
Yeah, that would be the idea. The plan for upstream as well is... I plan on having a KEP for this within the next couple of weeks and showing it off to API Machinery, and then hopefully upstream adopts this as well and generates the Go types as well.
A
The
idea
is,
we
we'd
scaffold
this
out
and
you'd
be
able
to
write
this
instead
and
we've
made
we've
maintained,
compatibility
and
controller
tools,
so
we'd
probably
maintain
the
ability
to
pull
in
go
at
least
for
a
while,
but
like
the
internal
tooling
would
all
be
around
the
intermediate
like
the
compiled
form
so
like
we
just
compiled
the
go
to
like
the
compiled
form.
A
Yeah, so theoretically you could go from CKDL to Java. Kubernetes the project is definitely going to maintain a generator for Go. I've talked to some folks upstream, and there are concerns over whether or not we want to generate other things off of the eventual OpenAPI versus off of the CKDL. I like generating off of the CKDL, because you get a more constrained problem set, and so...
A
You
can
make
better
decisions
about
what
you
generate,
especially
in
more
strongly
typed
languages,
so,
like
you
can
generate
like
you,
can
generate
like
rust,
rust,
enums
for
unions
for
instance,
or
or
stuff
like
that,
but
yeah
you.
You
could
definitely
like.
I
one
of
one
of
the
things
I
wanted
to
enable
is
people
consuming
this
and
do
an
interesting
thing
with
things
with
it,
generating
better
api
documentation,
for
instance,
is
like
a
thing
that
we
could
do
as
well
yeah
I
have
so.
A
I have my initial work up; I'll drop a link in the document. If anyone wants to go play around with the syntax, there are the two compilers that you saw, a couple of examples, a kind of formal-ish grammar, and some Vim syntax highlighting if you're working in Vim. So yeah, try it out; you can definitely file issues on that repository.
I
What do you mean... sorry, that's probably a really weird question coming out of nowhere. I guess what I was considering is: controller A is written in Rust and controller B is written in Go. Controller A wants to use controller B's types, but it would then have to generate types for itself, right? And if controller B is installed in a cluster, how could it generate those types? I would assume that would be from the OpenAPI spec that is served by discovery. So I guess I was asking whether we're potentially also thinking about serving the CKDL for this in some way, so you can easily generate your types based on that.
A
That is one option. On the other hand, generating based on the OpenAPI from the CRD is a little bit wonky, because you don't have any type names in it and you lose a whole lot of structured information. So that's maybe a longer-term discussion.
A
My thoughts, kind of short-term: today we effectively say "distribute the Go types," and instead we'd say "distribute the KDL or the CKDL," and then the parser can import either, from KDL or CKDL directly, and so you could generate your Rust bindings that way. It's a little bit closer to our traditional method, but it also doesn't require any changes to the API server or whatever.
A
All right, I think, unless anybody has any other quick questions or comments, we are over time for today. So with that, I'll see y'all in a couple of weeks.