From YouTube: Reverse Kube Resource - Jan Wozniak, Kubermatic
Description
In the Kubernetes world, it is a common use case to convert API resources written in Go to YAML manifests for further distribution, whether as part of a Helm chart, a kustomize template, or other tools.
Hi everyone, my name is Jan and I work at Kubermatic, and in the following 30 minutes I would like to share with you my journey writing this geeky, slightly esoteric, kind of weird tool that I spent some of my free time on. It's called reverse-kube-resource.
Now, essentially, what this thing does: on the input it accepts YAML files, and on the output there are Go files.
I practiced this talk on my family, and every time I started with that first slide, this was approximately the reaction they gave me. Granted, my daughters are one and four, so they don't understand Kubernetes very much yet, but I was expecting much the same even from people who do understand Kubernetes: why would you transform YAMLs into Go? In the remaining 30 minutes I'll describe the algorithm, and I'll try to put in a couple of hopefully entertaining memes along the way.
So what do I mean when I say generate Go out of YAML? Here on the left side of the screen you have a perfectly fine YAML manifest of a ServiceAccount resource. It has all of the properties you may need for a service account, so it's a perfectly valid thing to send to the kube API. On the right side of the screen is a semantically equivalent description of the same thing, just syntactically very different. YAML and Go are very different things.
It's a lot easier to transform YAML into JSON; they're much closer in their domain of usage. But Go is a very different thing, and I thought: how hard can it be? How difficult can it be to write a tool that takes one and generates the other? And there is also this question lingering in the room: why on Earth would anyone want this?
A
It
doesn't
sound
like
a
workflow
that
you
would
frequently
frequently
use
and
again
yeah
it's
a
little
esoteric
Tool,
but
the
the
observation
that
I've
made
and
couple
of
use
cases
that
that
I
stumbled
upon
were
that
whenever
you
want
to
run
or
or
deploy
your
applications
on
kubernetes
cluster,
the
odds
are,
you
will
deal
with
some
form
of
a
yaml,
whether
it's
it's
a
Helm
chart,
customized
templates
or
or
vanilla
yaml
files
exactly-
and
these
are
perfectly
fine
and
Fantastic
Tools
to
to
get
you
started
to
to
get
that
nginx
container,
serving
your
your
website
running
on
kubernetes
with
within
five
minutes
now.
What operators are good for is day-two operations: automating slightly more advanced things, with more support than YAMLs and Helm charts can offer. Frequently, these operator frameworks are written in Go. There are obviously other frameworks, but the mainstream, I think, is Go.
With that logic in mind, there are many examples in the real world. Argo CD, for example, defines its resources in Go; the same goes for Elasticsearch, cert-manager, Kubeflow, Datadog, Flux, and many others. And if you thought this was just a GitHub fashion trend, there's a screenshot from GitLab: for some of their resources, some of their operators, GitLab also defines the resources purely in Go.
There are obviously other paradigms; it's not the only way to write an operator. But I think there are actually a couple of advantages from my perspective, and mainly they relate to having a single source of truth. You define your resources in the same language, in the same place where you make the modifications. So whenever something happens, whenever you need to react and modify a resource, it's the same code base and the same language.
A
It's
also
easier
to
spot
regressions
and
changes
and
goes
without
saying
that
you
get
compile
time
checks
for
go
if
you,
if
you
mess
up
in
the
yellow,
manifest
you
learn
on
the
schema
validation,
but
that's
a
little
bit
later
than
during
the
compilation
time,
and
it
definitely
started
out
of
a
curiosity.
I
just
wanted
to
see,
if
that's
possible
and
how
difficult
it
is,
but
it
actually
has
a
couple
of
usages
in
real
production
environments.
One
is
a
project
called
keema
at
sap.
I was introduced to this project through my work at Kubermatic. Kyma previously used Helm and YAMLs extensively to maintain a set of applications, but the project is slowly evolving away from Helm- and YAML-defined resources, and now they have operators and controllers written in Go. So this tool helped automate a certain part of that migration and deprecation flow. The second case is the Kubermatic KKP platform.
In KKP we have this concept of add-ons and applications. For simpler applications you can define them in templates; you can use third-party add-ons from Helm and YAMLs directly. But for slightly more complex topologies, you again need to know a little bit of Go and write that extra operator. What do I mean by a more complex topology? For example, you have many clusters.
A
They
should
share
a
certain
part
of
that
Helm
chart,
but
they
also
will
have
something
that's
deployed
elsewhere,
not
not
in
in
the
same
kind
of
infrastructure
cluster.
Some
of
them
can
have
all
of
the
deployments
all
of
the
services.
Some
of
them
should
have
just
some
part
of
it.
So, moving the Cinder CSI plugin from the simpler add-ons concept to custom controllers: the third-party release of the Cinder CSI plugin is distributed purely in Helm, and I figured that if I were to maintain it, constantly keeping up with the new releases and the changes there and retrofitting them back into Go code by hand, I would frequently make mistakes. Rather, I'd have that part of the workflow automated, so it always generates the Go code, and I just modify, manage, and maintain the Go code.
Now, yes, of course, I could definitely do that manually. It probably wouldn't take me a lot of time, but it's manual labor, and we as software engineers don't like manual labor; we'd rather automate things. This might be the edge case where the automation wasn't completely justifiable or necessary, but it was a lot of fun to write. And now that we have the first section out of the way, we have defined the why and the what.
Let's look at how it's written, the slightly more geeky part of it: the algorithm. And before I go into the algorithm, I would like to put down some fundamental terminology.
A
These
are
a
couple
of
words
that
you'll
be
hearing.
When
I
describe
the
algorithm
itself,
they
the
first
four
they
come
directly
from
the
kubernetes
ecosystem.
So
it's
a
codec
Factory,
which
is
something
that
helps
you
to
get
from
gamos
to
runtime
objects.
Then
there's
a
runtime
schema
which
helps
with
the
mapping
of
what
group
version
kind
is
which
certain
runtime
object.
Then
there
is
this
interface
called
runtime
object.
Then there's metav1.Object, and finally reflect and go/ast; those last two are not related to Kubernetes at all, they are core Go packages. As already briefly described, the CodecFactory: there's an excerpt from the documentation, to be precise, for those of you who would like to read it, but the TL;DR is that it's an apimachinery package that provides a way to serialize and deserialize wire data, meaning JSON and YAML, into proper runtime objects.
A
So
whenever
you
write
your
go
code,
you
are
modifying
the
the
runtime
goal,
structures
and
yeah.
That's
that's
exactly
the
function
that
the
Codex
Factory
is
used
for.
The
important
argument
for
for
this
algorithm
is
data.
That's
really
array
of
bytes,
where
the
content
of
the
AMOLED
exists.
Defaults
can
be
nil.
Objects
are
used
as
again
nil,
but
then
it
returns
a
proper
runtime
object
of
a
specific
version
so,
for
example,
the
Pod.
A
If
the
yaml
defined
a
pod
or
service
account
like
you
could
have
seen
in
the
beginning,
that's
what
you
would
get
in
the
return,
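To make that concrete, here is a minimal sketch of that decode step, assuming client-go's ready-made codec factory rather than the tool's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// The "data" argument: raw bytes of the YAML manifest.
	data := []byte(`apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
`)

	// Decode(data, defaults, into): defaults and into can both be nil,
	// as described above; the type is inferred from group/version/kind.
	obj, gvk, err := scheme.Codecs.UniversalDeserializer().Decode(data, nil, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(gvk)                   // /v1, Kind=ServiceAccount
	sa := obj.(*corev1.ServiceAccount) // a proper typed runtime object
	fmt.Println(sa.Name, sa.Namespace) // demo default
}
```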
The important part, how it knows what to translate into what, is another package from apimachinery: the runtime scheme. In my opinion, "scheme" is kind of glorified terminology for a map; it just maps group/version/kind to Go structures. And group/version/kind is something you define in your YAML.
That means the apiVersion, for the core group usually just v1, and the kind, Pod, with a capital P. There's an example of how client-go initializes its scheme, because again, it's a map, so you somehow need to register all of these resources in it. It's ready to use, and if you use the CodecFactory with the scheme from client-go, you already get 90% of the way there.
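As a sketch, that registration amounts to something like the following; client-go's scheme package does this for every built-in API group, which is why reusing it gets you most of the way:

```go
package reverse // hypothetical package name for this sketch

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

// newScheme builds a scheme pre-populated with the built-in types
// (Pod, ServiceAccount, Deployment, ...) keyed by group/version/kind.
func newScheme() (*runtime.Scheme, error) {
	s := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(s); err != nil {
		return nil, err
	}
	return s, nil
}
```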
So what is a runtime object? I've mentioned that term a couple of times. It's an interface, and every single resource in Kubernetes has to implement it. Luckily, you don't have to do it yourself: there is a generator, code-generator, another CLI tool straight from the Kubernetes ecosystem, and if you properly define your structures and properly annotate them, you can use it to generate the code that satisfies this interface for you.
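For reference, the interface itself is tiny; it lives in k8s.io/apimachinery/pkg/runtime:

```go
// Object is the interface every Kubernetes resource implements.
// schema here is k8s.io/apimachinery/pkg/runtime/schema; code-generator
// writes the DeepCopyObject implementation for you from the
// deepcopy-gen annotations on your types.
type Object interface {
	GetObjectKind() schema.ObjectKind
	DeepCopyObject() Object
}
```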
Next, metav1.Object: so we have runtime.Object, and then we have metav1.Object, which is mainly an accessor for the metadata. Again, every single resource in Kubernetes has some metadata; for the scope of this tool, the important metadata are the namespace and the name of the resource. That kind of wraps up the Kubernetes section of the terminology. The reflection package is a core package from Go, and it is used for in-memory, runtime inspection of objects.
So you have a structure, a function, an interface, essentially anything, and you want to figure out what it actually is. You want to look inside the object: what fields it has, what their values are, in which package the structure is defined, whether it implements a certain interface or not. For this there are mainly two valuable helper functions. One is called reflect.TypeOf: you pass anything in there; that's kind of how it works.
You pass anything in, and it gives you back a structure describing what that anything is, so you can look at the fields and their types, and at the package path. reflect.ValueOf gives you the actual values. So in the first case you have a structure with two fields, X of type integer and Y of type integer, and TypeOf will tell you: hey,
there is a structure, and it has these certain fields of type integer. ValueOf, whenever you instantiate the structure, whenever you give it actual meaning with proper values, will tell you: ah, so for this instance the value of x is 7 and the value of y is 25. So you can reflect on essentially any part of the Go code.
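Spelled out in code, the slide's example looks roughly like this (the struct name Point is mine):

```go
package main

import (
	"fmt"
	"reflect"
)

type Point struct {
	X int
	Y int
}

func main() {
	p := Point{X: 7, Y: 25}

	// TypeOf describes the shape: field names, field types, package path.
	t := reflect.TypeOf(p)
	for i := 0; i < t.NumField(); i++ {
		fmt.Println(t.Field(i).Name, t.Field(i).Type) // X int, Y int
	}

	// ValueOf gives the actual values of this particular instance.
	v := reflect.ValueOf(p)
	fmt.Println(v.Field(0).Int(), v.Field(1).Int()) // 7 25
}
```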
And the last part of this section, probably the most geeky one, at least from my perspective, is the AST parsing.
When you have Go code, you can investigate that Go code: you can look at what you have written, and you can make arbitrary modifications programmatically. For example, say you have code where you defined a certain function, and you want to add a new argument to that function, and you want to do it throughout an extensive code base.
The go/ast package is a fairly useful tool in this case: you just write your modification algorithm, and it does the job for you. The first part covers the objects, the structures, and the types. Then there is the importer.
Whenever you're reflecting on code, whenever you're investigating it, you will frequently find that it imports packages from the rest of the world, possibly even from different modules. The importer is the piece that knows how to figure out and fetch the code that you import. The parser is what parses the source code and gives you the AST, the abstract syntax tree, for you to easily inspect and navigate through the source code and its types.
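Here is a minimal sketch of the parser half, walking an AST to find function declarations; the toy source and file name are, of course, made up:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	src := `package demo

func Hello(name string) string { return "hi " + name }
`
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	// Walk the tree; a modification pass would rewrite nodes here instead.
	ast.Inspect(file, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			fmt.Println("found function:", fn.Name.Name)
		}
		return true
	})
}
```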
So, a brief summary: we have the CodecFactory, which together with the scheme converts YAMLs to runtime objects. We have the scheme, which defines which group/version/kind maps to which structure. Then we have runtime.Object, a generic representation of every single object type; metav1.Object, the metadata interface; reflection; and the AST. Let's combine these together into the actual algorithm, and it starts with the YAML parser.
The first thing that happens is that it identifies the document separators, because parsing the whole thing at once would be challenging. Instead, it looks at every single resource independently, hence the document separators. Then it ignores comments and looks at the resource.
The first step is that we take the CodecFactory and the scheme, and we get a runtime object. This is not Go code yet; it's an in-memory structure, a bunch of bytes, and it has, as you can see, a lot of extra fields, a lot of cruft that we don't necessarily want in the output object.
We don't want that in the Go code, because then it would not only be syntactically different, it could also be semantically different. So how do we get rid of the cruft? There is something called Unstructured, which is another implementation of runtime.Object from apimachinery, and it's essentially just a map of maps, loosely similar to how you would define your YAML. It's a recursive structure.
It doesn't have the native types, so you lose the benefit of knowing the exact proper type of a certain resource and its values, but it matches your YAML exactly. So in combination we can use these two runtime objects, the one with the proper native types and the unstructured one, and move to the next phase, which is the runtime object processor. These are the same two runtime objects from the previous slide; they have just moved from right to left.
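One way to obtain that second, untyped view, assuming sigs.k8s.io/yaml for the unmarshalling (a sketch, not necessarily how the tool does it):

```go
package reverse // hypothetical package name for this sketch

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

// decodeUnstructured parses the same YAML bytes into a recursive
// map[string]interface{} that mirrors the manifest exactly: only
// fields that were literally present in the YAML appear in it.
func decodeUnstructured(data []byte) (*unstructured.Unstructured, error) {
	m := map[string]interface{}{}
	if err := yaml.Unmarshal(data, &m); err != nil {
		return nil, err
	}
	return &unstructured.Unstructured{Object: m}, nil
}
```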
We have the ServiceAccount, one version with extra cruft, the other matching exactly but missing the valuable type information. The first step is that we need a Go file, and every Go file needs a package; that's actually passed in through a CLI argument. And then we abuse reflection. Reflection is going to be the core of this whole algorithm.
So we look at the TypeOf of the first runtime object, the one with the proper types, and thanks to reflection we know what to import: we know the type is defined in a certain package, so we can write that code down. The second thing we get from the type is the type name itself, so we can write that out too. We know it's going to be a variable; we don't know its name yet, so there's a placeholder, but we have some foundation to build on.
Now we pause with the reflection. We can use the metav1.Object interface, because every runtime object can also be cast to it, and we know the object has a certain name. We can use GetName for the comment, and almost the same string for the variable identifier; it needs to change a little bit, because the restrictions on resource names in Kubernetes are very different from how you name variable identifiers in Go.
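A hypothetical sketch of that renaming step; Kubernetes names allow characters like '-' and '.', which are illegal in Go identifiers, so something along these lines has to bridge the two (the helper name is mine):

```go
package main

import (
	"fmt"
	"unicode"
)

// goIdentifier turns a Kubernetes resource name into a valid Go
// identifier by dropping separators and capitalizing the letter
// that follows them.
func goIdentifier(resourceName string) string {
	var out []rune
	upper := true
	for _, r := range resourceName {
		switch {
		case r == '-' || r == '.':
			upper = true
		case upper:
			out = append(out, unicode.ToUpper(r))
			upper = false
		default:
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(goIdentifier("cinder-csi-controller")) // CinderCsiController
}
```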
Then we continue to abuse reflection, and we look recursively into every single field the resource has. One exception is TypeMeta: we can skip it, because we already know the type information from the type itself and from the group/version/kind. The ObjectMeta, however, again contains valuable information.
So we know there's a new import, because we have a new object coming from a different package, and we know there's a new field, so we write it down. Then we recurse deeper into the ObjectMeta, and every time we recurse into something and inspect a certain field, we also make sure it exists both in the typed runtime object and in the unstructured one. The unstructured keeps helping us figure out what we can omit and drop.
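A condensed, hypothetical sketch of that recursion: walk the typed struct with reflect, look each field up in the unstructured map by its JSON tag, and only keep what the manifest actually set. The real tool emits Go source at this point; the sketch just prints the kept fields.

```go
package reverse

import (
	"fmt"
	"reflect"
	"strings"
)

// walk recurses over the typed value and keeps only fields that also
// appear in the unstructured map, i.e. fields the YAML actually set.
func walk(v reflect.Value, u map[string]interface{}, path string) {
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		tag := strings.Split(f.Tag.Get("json"), ",")[0]
		if tag == "" || tag == "-" {
			continue // inline/ignored fields such as TypeMeta are skipped
		}
		raw, ok := u[tag]
		if !ok {
			continue // cruft: defaulted or zero-valued field, drop it
		}
		child := v.Field(i)
		if child.Kind() == reflect.Struct {
			if m, ok := raw.(map[string]interface{}); ok {
				walk(child, m, path+"."+f.Name) // e.g. into ObjectMeta
				continue
			}
		}
		fmt.Printf("%s.%s = %#v\n", path, f.Name, child.Interface())
	}
}
```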
So we put the name there and we put the namespace, and that is actually it; that's the whole algorithm. It's a little bit of reflection and a little bit of apimachinery. There is a little bit of post-processing too; one notable step uses the go/ast package to tidy up the imports, so the output looks a little more like idiomatic Go code. And that is essentially how you can translate your YAML into a Go type. But wait, something doesn't add up, right?
You said that the client-go scheme we used holds just the native resources. What about CRDs? How do they work, given they are not part of the scheme? And yes, that is correct. There is an easy way out: we can just dump the unstructured. But then we lose the convenience; we lose a lot of value.
And that value, I think, is really good to have. So I had this idea that CRDs should kind of work out of the box, with some extra arguments. Before I describe how exactly the CRDs work, I would like to show you the sample usage, because I had a certain user experience in mind that I wanted to achieve. It starts with a git clone of a certain version of an OpenStack provider; that's how they distribute their software, so you have a Helm chart.
We template it, because we need just the raw YAMLs, and then we pass those to reverse-kube-resource. The first two commands are not that important, so we can omit them and look at the last one, reverse-kube-resource itself. One of the arguments is the package: we need to define what the name of the generated package is going to be. The second one is optional.
That's the header file: you can have some boilerplate embedded, just like every generator in the Kubernetes ecosystem has. And then you pass in the source; in kubectl it would be -f, and here I chose --src. I probably should have chosen -f too, but at the moment it is this way. Then you redirect the output to a file.
What I wanted for the CRDs was to avoid difficult, elaborate command-line arguments where you would declare that a certain CRD belongs to a certain group/version/kind and how to register it to the scheme. Instead, I wanted to just provide a directory, and the tool itself should algorithmically go over the entire source code, find the CRDs, find their definitions, and plug them into the scheme, without any human involvement, without anything error-prone. That's the algorithm I've tried to implement.
So there are certain code patterns we can look for to figure out what the CRDs are. They all come from apimachinery; the particular patterns we look for involve the runtime package and the runtime/schema package. If you define a CRD, somewhere you need to declare that it has a certain group and version, so we can look through the source code with kind of a glorified grep: parse it into an AST, and then look for this code on the AST itself.
The second thing is that a CRD has a certain structure, certain annotations, certain things we can always find, and it always implements runtime.Object. Again, luckily, you don't have to write that yourself; you don't have to implement GetObjectKind and DeepCopyObject, because code-generator or client-gen from the Kubernetes community does it for you. But every single CRD, every single resource, will have this implemented somewhere.
So the gist of the algorithm can be summarized in a simple for loop. We first initialize the parser, it looks at the entire code base that we pass it, and then we iterate over every single symbol present in the AST.
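A hypothetical sketch of that loop, flagging composite literals of runtime/schema's GroupVersion type; the directory path is made up, and matching by selector name alone is cruder than what the tool would actually need:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	fset := token.NewFileSet()
	// Parse every Go file in the (hypothetical) API package directory.
	pkgs, err := parser.ParseDir(fset, "./apis/v1alpha1", nil, 0)
	if err != nil {
		panic(err)
	}
	for _, pkg := range pkgs {
		for _, file := range pkg.Files {
			ast.Inspect(file, func(n ast.Node) bool {
				lit, ok := n.(*ast.CompositeLit)
				if !ok {
					return true
				}
				// The "glorified grep": schema.GroupVersion{...} literals
				// are where a CRD declares its group and version.
				if sel, ok := lit.Type.(*ast.SelectorExpr); ok && sel.Sel.Name == "GroupVersion" {
					fmt.Println("GroupVersion literal at", fset.Position(lit.Pos()))
				}
				return true
			})
		}
	}
}
```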