From YouTube: Kubernetes SIG Apps 20180212
Description
Kubernetes SIG Apps meeting 02/12/2018.
A
So, some announcements. Well, one big announcement: Helm Summit is coming up next week. I believe it's Wednesday and Thursday in Oregon. Hope to see a lot of you there; I'll be there. I know Matt will be there, I believe; not sure if Anand is gonna make it, but a lot of the community will be. There should be some exciting discussion, both about kind of where we're at and where we're going in Helm 3.
A
So, moving on to updates. For StatefulSets, we're really just working right now on moving stuff over to the v1 API. There is a PR by Matt Liggett that enabled the storage format to v1 by default, and we want to try to get conformance done by the end of this release. I don't really have any updates on jobs. Does anyone else?
B
OK, for jobs itself I don't have any updates. There's one PR for cron jobs that I would like to get merged for 1.10, which is about, finally, initializing jobs out of the CronJob. That's the only thing; I'm hoping to find some time this week and push it over the line for 1.10.
D
Yeah, so when I say tools for Kubernetes, let's have in mind kubectl for one moment and, for example, Helm. These are usually CLI components, usually written in Go, and they talk to a component on your Kubernetes cluster: in the case of kubectl it's the API server, in the case of Helm it's called Tiller. And essentially, they talk.
D
If we talk about these tools, they usually also have some web dashboards. If we talk about kubectl, there's the Kubernetes dashboard; when talking about Helm, there's Monocular. And the problem that usually appears is that, for example, Helm does its communication over gRPC with the Tiller server, and right now there's no gRPC-Web support that you can use, so most of the time you end up having to keep in sync APIs for gRPC and for HTTP for your web dashboard.
D
So this toolkit is basically for building CLI components, with a Kubernetes server component and a web dashboard, and it's basically packaged so that you can start working on your tool just by forking this repo, renaming your tool, adding your methods, and basically just deploying it. So right now there's...
D
There's nothing on my cluster right now, and basically it's a self-deploying tool: it uses the Kubernetes API to deploy itself, and right now it creates a pod with two containers that contain the server components. The client you have locally; you can either build it with make or download it from a GitHub release. And basically, at this point...
D
It creates an authenticated tunnel using the Kubernetes API back to your server, and it does whatever your service does. You can also do server-side streaming, because it's gRPC, and these are basically implemented so that you can understand how to extend these commands going forward. As I said, there's also a web dashboard, so there is a proxy command.
D
That starts a proxy on port 8081 locally, and basically it exposes whatever you have defined in your service. Now, how does it do that? So, as you know, this is protobuf-defined gRPC, and it uses a project called grpc-gateway that basically creates a proxy in front of your gRPC server, which translates JSON from web requests into protobuf for your gRPC server, and the other way around for the responses. And basically you define...
D
You
say
that
for
this
RPC
for
this
jar,
PC
method,
I
want
the
get
method,
mapped
to
slash,
API,
slash
version
and
it
will
automatically
generate
based
on
the
on
the
protobuf.
It
will
generate
client,
the
RPC
client
that
you'll
use
in
the
CLI.
It
will
generate
the
G
RPC
server
and
it
will
also
generate
the
GRP
sings,
a
play,
the
proxy
that
will
stand
in
front
of
the
on
the
G
RPC
server,
and
it
will
also
generate
the
types
of
client
for
the
angular
dashboard
that
you
see
right
here.
D
So essentially, all you have to do is update your definitions, your API, once, and all your servers and clients are in sync. It uses, as I said, the protobuf compiler to also generate the gRPC gateway and the Swagger definitions, and it uses swagger-codegen to basically create the TypeScript client. So, as I said, for this repo you only have to add your commands here... yeah, add your command here, you implement...
D
Whatever
server
method,
you
have
add
a
Cobra,
CLI
command,
add
an
angular
dashboard
components
and
that's
it
there's
examples
on
how
to
do
both
of
those
things
and
you
can
basically
emulate
the
version
and
proxy
and
stream
commands
that
are
implemented
in
web
and
in
in
CLI
work-in-progress.
First
of
all,
is
integrating
with
draft
so
that
you
can
easily
iterate
through
through
your
builds
state
management
using
at
CD.
It's
in
progress,
I
already
started
it,
and
also
using
config
maps,
are
back
and
SSL
help
wanted.
A
Thanks, that was cool. Does anybody have any questions?
D
Essentially, if you want to create a tool like Helm or like Draft, this is basically a starting point, so that you don't have to do all the plumbing of connecting your gRPC client and server and connecting through authenticated tunnels. So basically it's just a starting point for you to create one.
A
I think Phil's question pertains to this: a lot of times with Kubernetes, a user has specific RBAC permissions defined, and a lot of times we use the pattern of impersonation in order to carry the privileges of a particular user to the service that is performing an action on their behalf.
A
It's just a pattern that we like to try to carry, and, to be honest, I'm not on the security side. I get why you would want to do impersonation, and I've seen it used successfully as a pattern before. Installing with the necessary permissions and not impersonating is also a valid approach, depending on what your exact concerns and needs are, and what your threat model is, right.
A
That,
like
what
Bill
was
talking
about
is
on
one
hand.
Impersonation
is
one
way
that
we
ensure
that
a
user
has
permissions
rather
than
a
service
having
commissions,
but
there
could
be
valid
models
where
you
want
the
server
to
have
a
specific
set
of
permissions,
and
that's
it
and
then
the
user
can
access
that
server
and
he
doesn't
really
need
to
impersonate.
Yes,.
G
Example
of
this
is
like
I
have
control
and
I
wanted
to
move
stuff
out
of
the
client
into
the
server.
For
all
the
various
reasons
you
might
want
to
do
that
using
this
tool
when
it
runs
in
the
client
and
I
create
deployments,
it's
using
my
credentials
to
create
those
deployments
right,
but
then,
as
soon
as
I
move
it
into
the
server.
It's
not
using
my
credentials
anymore,
right
and
so
I
might
be
able
to
escalate
privileges
or
do
other
weird
stuff
and.
A
One
of
the
I
don't
know
if
you
want
to
call
it
issue,
but
the
current
design
of
the
core
controllers
that
running
controller
manager.
They
all
have
a
specific
set
of
our
back.
That's
pre-installed
in
the
cluster
which
okay,
it's
one
way
to
do
things,
but
maybe
impersonation
might
have
been
a
better
idea.
Go
in
go
forward
with
that,
but
it
gets
a
little
bit
better
to
do
our.
E
Just one piece of feedback: I was just looking at the readme, and it seems like the assumption is that this is for somebody who already appreciates the model that Helm and Draft implement. However, it doesn't give any reason as to why that model may be good, or what not. So if you could outline why we would use this, that would probably help, because right now, looking at it, it just says...
J
All right, perfect. Thanks a lot for having me today. I want to show you kubed-sh, and before I do that, very quickly, a bit of the motivation behind it. I believe that we should be able to interact with a Kubernetes cluster the same way we're interacting with our local machines. Think about how you launch a binary: you just type in the name of the binary in your shell and hit enter.
J
If
you
want
to
see
what's
running
on
your
box,
you
and
ups,
if
you
want
to
get
rid
of
one
of
the
processes,
you
get
the
process,
ID
and
say,
kill,
process,
any
questions.
Why
don't
we
have
that
for
creatives
and
create
this?
We
have
to
learn
all
these
concepts,
parts
deployment,
services
and
so
on,
and
if
Q
Caudill
we
are
writing
demo
files
and
so
on
and
so
forth,
so
I
go
with
is
to
provide
a
99%
of
people
up
there
same
interaction.
J
So if we do ps now, we see that there is a cluster process running from the source test script. That script is exposed through a service that is called test, which is derived from the source name; you can override that, or you can configure many other things, through environment variables. And now, if we say curl test, we should hopefully get a hello back, and that curl is actually running in a pod.
J
And then I get all the contexts, and I could say use whatever context here, so you can switch between different contexts; and within a cluster I have environments, so you can essentially launch different things in essentially a bunch of flavors for the resources created. And the built-in command curl is technically simply launching a pod for debugging. And if we then look again, you would see the underlying kubectl command it's actually launching.
J
It's not for production, it's definitely not for production. I don't want people... like, you can use it in production, you know, if you want to more easily interact with a cluster, but it's really for prototyping, developing quickly. Imagine you have two or three microservices and you are working on a new one that depends on all the others, and you really want to quickly iterate, and at some point in time you say: okay, now I run a CI/CD pipeline, put it...
J
You
know
the
container
image
somewhere
and
then
it
actually
goes
through
the
normal
life
cycle
and
will
be
pushed
into
production.
It's
really
for
people,
you
know
your
average
Joe,
node
or
eighth
know
Ruby
or
whatever
programmer.
That
has
access
to
a
cluster
for
aqueous
cluster,
but
doesn't
want
to
attend
all
who
ever
learn.
J
Right, so underlying, what am I using... I'm not quite sure if currently I'm using the readline package, so it's written in Go, and it's using that, which more or less gives you a shell: you have completion, you have a history there, and so on and so forth. The rest is, you know, shelling out to kubectl for 80% of it, and a few things like ls and cd and pwd and stuff like that, that's just...
J
I got a lot of cool feedback, in terms of, you know, make it scriptable, and use it to do end-to-end testing. There are many, many more things that, you know, still don't work in terms of interactivity: you launch something and then you actually want to interact with it. So I appreciate help, because...
E
Yeah
I
was
at
some
point.
I
was
looking
at
some,
some
of
the
shells
written
and
go
and-
and
that's
that's
an
interesting
territory
but
I've
been
hearing
more
recently
than
there
is
this
fight
in
vice
project,
called
the
oil
that
sort
of
truck
he's
trying
to
write
a
new
shell
but
they're
trying
to
make
a
new
shell.
Oh.
G
If nobody else wants to, I can jump in. All right, the big news for us is 2.8.1 came out last week, and the Helm Summit is next week, so we're between the two. Most of the core team is working hard on getting the preps done for the Helm Summit out in Portland. We're excited; we have a pretty good-sized crowd and some excellent, excellent speakers in the lineup.
K
Young
week
so
and
one
week
we'll
have
the
the
trans
meeting
and
then
the
other
week
we'll
have
a
chance
office
hours,
so
the
charts
office
hours
is
tomorrow.
This
is
a
good
place
to
come.
If
you
have
PR
that
are
outstanding
and
would
like
to
you
know,
talk
to
some
maintains
and
try
to
get
them
merged.
L
Hey, hi. We got some cool feature last week where you can anonymously grab index.yaml but put basic auth on the API routes, so that's pretty nice. Also, this is more news about me, but also kind of for the project: I'm gonna be working with Codefresh on extending their Helm support. As part of that, they're letting me spend a lot of time working on the ChartMuseum project, so I expect some cool things in the near future.
A
Let's see... all right, there's one discussion topic we have, and I want to give people a chance for other discussion if they have things to talk about. If there are, you know, no other open discussion topics and people are amenable to it, then maybe we could dig into that. Does that sound cool?
A
Okay, so one discussion topic: the application object CRD and controller that we have a KEP out for. There's, number one, I think, some confusion, even between people who've been in Kubernetes for a long time, about the KEP process.
A
With the old design-proposal process, you put in a feature, you put in a design document; when the design was accepted, that's what we're gonna go implement, and that kind of is what it is; and at the end of the feature, when you're trying to merge it in for the release, if what you implemented meets the design constraints that were in the design proposal, that's, you know, kind of like: okay, we should definitely merge this. With the KEP process, apparently, what we're supposed to do is, if we've decided we want to do something...
A
I mean, honestly, my only thing is: a wise man once said we should use process, but only as much process as is useful, and not a drop more. That being said, we shouldn't diverge... I don't think we should, as a SIG, diverge from whatever processes are going to be prevalent across the community in general; but it seems like we have an interest in doing something here, and I just want to see what people think the path forward should be.
A
Should we, as the KEP process says, accept what's out there now and continue to iterate on that? Should we maybe break it up a little bit more, for instance? Because, I mean, I imagine what will happen is: there are many people who are very interested in that metadata, for instance, and what that should look like, and I imagine that can be iterated on very rapidly, and whatever we come up with as a design right now is probably not going to look like the end result of it.
A
I think the life cycle of the application object is another thing that we could iterate on, maybe independently of the metadata. So I'm just wondering, as we move forward with this and we set up a repo: does anyone else have any ideas what it should look like? What do you think? What does our SIG think works best for us?
N
A question, kind of, about the process, because I've been following the KEP, and I think the application object is an interesting idea, but it seems like the community is, you know, mixed about whether it should be added in as a part of the core or as something separate. So the question is: if we kind of walk away from the KEP and just start implementing something as, like, a CRD with our own set of controllers, is there an opportunity then to bring that back to core, or does it forever exist outside?
A
That is what we should do first, and if it's proven to have value as a CRD and as an ecosystem component, then we consider bringing it back into core. You can, if you build something... so, to give a little bit of history: Deployment started as DeploymentConfig in OpenShift, and it was developed as something external to Kubernetes before it was contributed back into the core API. So there's definitely a way to do that. Now, on the flip side of it...
A
Looking
at
it
from
the
perspective
of
a
suite
you're
gonna
spend
significant
software
hours,
go,
get
it
outside.
You're
gonna
spend
more
suite
hours
moving
it
back
in
and
you
probably
are
going
to
end
up
with
two
separate
things,
one
in
core
and
one
as
an
extension.
If
you
choose
to
move
it
back
in
that
you're
gonna
have
to
support
going
forward.
A
I
think,
ultimately,
what
the
feedback
that
I
got
from
the
community
at
large
is
that
if
we
need
to
be
able
to
use
extension
mechanisms-
and
we
can't
put
everything
in
core
and
as
we
move
forward,
we
should
prefer
doing
extensions
and
then,
if
we
prove
that
it
really
has
value
as
an
extension,
then
moving
it
into
core,
and
we
should
focus
our
energy
on
easing
the
path
to
developing
extensions,
as
opposed
to
not
developing
an
extension,
because
we
don't.
We
have
questions
around.
A
I don't want to go do something in isolation; I'd rather get feedback from the community and build something together, if there's interest in doing something together. And if ultimately the feedback is, like, that's not what we want to do, that's fine, but I feel like that's the antithesis of the feedback I'm getting. It seems like we all want to do something, and we all kind of want to converge on something at least similar and compatible.
A
But
there's
you
know
no
other
process
for
doing
that
other
than
setting
up
a
cap
as
it
is
for
repo,
that's
something
we
could
do.
But
how
does
that?
How
do
we
keep
that
from
progressing
to
just
implementing
random
things
without
actually
discussing
it
with
each
other
and
making
sure
we're
building
the
right
things
for
the
community
at
large.
A
Sure, and I mean, ultimately, I think if we're going to do something, what will happen is we're gonna put it in a SIG-maintained repo. So I think the question, and maybe I'm not phrasing it well, the question is: if we do that, how do we want to manage our software in our SIG? Do we want to do KEPs? Do we not want to do KEPs? Do we want to do KEPs only some of the time? Do you think there's a benefit to doing a KEP?
N
When
you
talk
about
caps
in
this,
in
this
perspective,
are
you
saying
you
know
branch,
a
separate
repo
to
start
doing
this
like
work
on
the
app
object
that
accept
caps
towards
the
app
object,
repo
or
whatever?
Is
that?
Is
that
what
you're
talking
about
yeah
I
mean
I'm,
yeah
I,
don't
see?
Well,
I,
don't
see
what
the
problem
is
doing
this
like
like.
Why
not
take
it
in
a
separate
repo?
Don't
don't
go
fast,
but
go
slow
so
that
we
make
sure
to
have
time
to
get.
N
You
know
input
from
everybody,
and
you
know
whether
it
doesn't
have
to
be
as
complex
as
a
cap,
but
I
mean
just
simple.
You
know
specs
to
improve
the
application,
just
put
together
a
process
whereby
you
know
people
can
get
involved
in
a
collaborative
way
like
I.
Don't
I
mean
it
like
if
they?
If
the
goal
is
to
go
slowly
and
make
sure
we
get
everybody's
input,
then
just
like
let's
go
slowly
and
get
everybody's
input,
it's
not
like
well.
A
I
mean
here's
the
thing,
so
that's
kind
of
a
exact
feedback
I'm
looking
for,
if
we
feel
like
the
cap
is
a
clunky
process,
we
could
just
set
up
like
issues
on
github
on
the
repo.
If
people
would
rather
do
it
that
way,
we
could
have
a
separate
issue
for
like
application,
metadata,
Application,
Lifecycle,
so
forth
and
so
on.
We
can
offer
designs
that
way.
A
That
is
one
path
to
move
forward
to
right,
and
it's
not
really
about
how
fast
we
move
I
think
we
can
move
as
fast
as
people
want
to
move
and
people
are
comfortable
with.
Ultimately,
adoption
is
the
test
of
how
well
you're
doing
right.
So
when
you
start
seeing
people
pick
it
up
to
use
it,
that's
when
you
know
that
you've
got
something
relatively
solid.
It's
just
like
if
I
think,
there's
a
lot
of
there's.
Definitely
a
lot
of.
A
Work
on
helm
or
wants
to
go
work
on
kubernetes
core,
isn't
complete.
It's
not
a
completely
separate
process
where
your
mind
is
just
blown
in
it
like
you're,
not
completely
there
so
unfamiliar
with
it.
It
makes
no
sense
to
you
right
so
I
mean
before
anything
gets
done.
I
just
wanted
to
try
to
get
feedback
about
like
when
we
start
doing
our
own
repos
as
a
cig.
A
Would we, as a SIG, be more comfortable doing things like using feature branches and release branches, as opposed to trying to push everything into master simultaneously like the Kubernetes process does? That process was stood up because Kubernetes is a very unique project that gets more PRs than pretty much anything on GitHub, but I don't think everything in general that we do as a SIG is gonna have those same constraints, right?
G
A specification is just an attempt to gain clarity on a particular tool, format, or method of doing something, and then whether or not these get implemented is really left up to whether or not other people find them useful. So you can look through the registry of PEPs, for example, and see ideas that were formalized but never adopted. Likewise, if you really want to see this in action, go look at RFCs, right; there are billions of them, of which some were utter...
G
Failures
right
and
people
specified
something
very
carefully
and
then
tried
to
build
it
and
it
didn't
meet
their
needs
and
so
on.
But
it's
a
successful
process,
because
now
you
can
look
back
and
say:
oh
here's,
an
RFC
for
somebody
who
tried
to
do
something
similar
to
this.
It
failed
right
so
I
like
the
idea
that
that
you're
trying
to
push
with
the
caps
I
really
think
that's
the
way
we
ought
to
go.
G
I
just
feel
like
there
that
the
definition
itself
is
loaded
with
a
major
ambiguity,
which
is
whether
we're
describing
a
feature
request
for
kubernetes
core
or
whether
we're
trying
to
agree
upon
a
standard
for
something
so
I
liked.
The
way
you
were
handling
things
that
the
application
thing
I
too
was
like
startled
when
it
jumped
into
the
acceptance
statement
like
wait.
What
what
I
thought
we
were
still
discussing
this,
but
now
I
see
it
really
has
more
to
do
with
the
the
sort
of
when
I
think
our
Mis
labelings
of
the
states
they're.
So.
A
Right,
like
I,
mean
to
me
accept
it
and,
and
you
know,
Matt
Farina
was
confused
about
the
same
thing
right
he's
like
accept.
It
means
this
is
what
you
should
go,
build
and
I.
Think
one
of
my
concerns
is
that
that's
exactly
what
it
means
for
the
design
proposals
and
the
community
that
we
have
right
now
and
that
that
ambiguity
and
I
could
see,
causing
a
lot
of
confusion
to
people
who
do
of
working
core
right
right,
whereas.
G
..."implementable", but we don't know if it's in a state that we all agree is the best way to implement something. Correct; I mean, that's why the sort of explanation of the state flow there makes it sound more like describing the progress of a feature request for Kubernetes core, and not so much like your typical standardization process would.
K
You
know
covers
a
lot
of
different
things
and
it's
a
little
bit
hard
for
for
people
to
get
feedback
in
in
current
state
and
I.
Think
what
would
help
is
if
there
were
some,
you
know,
maybe
even
smaller
proposals
which
cover
specific
things
like
you
know,
what
should
a
metadata
look
like?
What
should
have
the
controller
react,
and
these
are
all
kind
of
different
things
that
I'm
sure
different
people
have
different
opinions
on
so.
G
Sure, okay, let me just give you a quick thing, and then in two weeks I can do a little more detail. But real quickly: Kubebuilder is supposed to be an SDK for building APIs. It starts out... so I have an empty project here against Minikube, you can see, and the first thing you do is initialize a new project.
G
So
this
is
just
gonna
neutralize,
a
new
project
and
it
prints
out,
like
here's
all
this
stuff,
I'm
initializing
for
you
and
creates
all
these
packages,
and
then
this
says,
go
ahead
and
create
a
resource,
so
I'll
go
ahead
and
create
a
resource,
and
then
it
prints
out.
Okay,
here's
all
the
files
I'm
generating
for
you
and
you
give
it
like
the
group
and
the
version
and
the
kind
of
the
resource.
And
then
it's
gonna
go
run
all
the
code.
Generators
for
that
resource.
G
You
have
a
nice,
client
and
Informer,
and
all
this
stuff
you
can
see
it
generates
both
a
type
file
and
a
controller
file,
as
well
as
the
test
files
that
go
go
with
these.
So
I'm
just
gonna
pop
one
of
these
open
real
quickly
in
another
tab,
while
it's
generating
the
code
for
you
and
you
can
see
it
creates
this
thing
where
it
adds
like
these
importance
and
all-important
annotations
like
Jen,
Klein
and
deep
copy
and
open
API
Jen
and
all
this
sort
of
stuff.
G
So
the
cool
thing
here
is:
it
generates
a
full
controller
for
you,
and
all
you
need
to
do
now
is
implement
this
reconcilement,
but
it
gives
you
a
client
and
everything
to
talk
to
kubernetes,
and
so
you
don't
have
to
do
any
of
that
stuff
and
it
just
prints
one
out
for
you
to
reconcile
and
then
real
quickly.
You
may
be
interested
to
see
that,
like
everyone
knows,
tests
are
important,
but
they're
kind
of
obtained
right
and
getting
started
is
probably
the
hardest
part.
G
This
gives
you
the
fully
functioning
test
that
checks
at
the
reconcile
loop
is
run
with
the
right
controller
or
with
right
keys.
No
errors
occur
when
you
pray
an
instance
of
this,
and
so
it
just
gives
you
a
little
slot
that
says:
okay,
as
you
add
logic
to
your
controller,
just
go
ahead
here
and
check
that
the
controller
did
those
things,
and
this
brings
up
a
control
plane
using
a
test
harness
written
by
some
of
those
folks.
So.
G
You can see my terminal, but not the code? Oh, because... okay, got it: you lock it to one screen, I guess, and I locked it to one tab in the terminal, which is weird. So it's finished generating the code; let me just pull this up. All right, can you see it now? Yeah? Okay. So you can see here, this is what was generated, and then it says, like, insert your code here; that's what I was talking about. And then let me show you the controller again quickly.
G
So
this
is
the
controller.
That's
been
created
for
you
and
has
this
reconcile
function
here
and
that's
gonna
get
called
and
it
does
all
the
crazy
stuff
with
the
queuing
at
the
Informer's
need
and
sets
us
listeners
for
you
and
then
down
here.
You
can
see
an
example
of
how
to
watch
other
things.
It's
really
simple.
G
This
is
going
to
go
ahead
and
compile
the
controller
and
install
this
here
at
ease
and
not
do
all
this
stuff.
While
this
is
getting
set
up.
I'll
just
talk
about
some
of
the
features
that
are
built
in
here
and
why
you
might
want
to
use
this
so
what
I've
demoed
is
like
wow.
This
really
is
to
get
started,
but
it
does
some
pretty
advanced
stuff
all
right.
It
doesn't
advance
stuff
like
our
back
rules
for
you
creating
reference
documentation
and
all
that
sort
of
stuff.
So.
G
You
can't
see
I'm
going
to
create
one
of
these
things
you
can
see
in
the
tab
you're
looking
at
it
just
ran
the
reconcile,
so
you
can
see
that's
running
there,
it's
running
against
mini
coop,
and
so
that's
your
whole
thing.
You
just
got
an
API,
that's
empty
and
you
can
just
start
dropping
in
your
schema
wherever
you
want.
Some
things
that
you
might
find
interesting
is
generates
these
docker
files,
and
one
of
them
is
an
installer
and
the
Installer
will
create
a
container
that
will
install
the
controller
install
the
crts,
install
the
are
back.