From YouTube: Kubernetes SIG Apps 20180226
C: Okay, so what I'm talking about today is something that I demoed briefly, I think two weeks ago, but in a little more depth: building a toolkit for building Kubernetes API extensions. So briefly, I just want to go over quickly what it looks like to build a Kubernetes API extension end to end, and there are actually several steps to this.
C: So the first is setting up your development environment, which may mean downloading and installing a bunch of code generators at specific versions, then setting up your project workspace, which may mean getting vendored libraries from a bunch of different Kubernetes repos. This can be tricky depending on how you do the vendoring and how many libraries there are. When you pull in your dependencies, you can actually get dependencies at the wrong versions that are incompatible, and then spend quite a bit of time trying to unwind that ball of yarn.
C: After that, you need to set up your controller business logic, which means setting up a worker queue, setting up listeners for informers, and defining a reconciliation function, then wiring all these pieces together so that you have a single set of informers shared across many, possibly multiple, reconcile loops. And then only at this point do you really get to write the business logic for your app.
C
But
the
philosophy
behind
this
is
is
not
that
this
stuff
is
impossible
to
do
a
lot
of
people
do
it,
but
it
feels
a
lot
like
walking
down
the
hallway
with
a
bunch
of
marbles
on
it.
Right
like
it
would
be
better
if
just
the
marbles
weren't
there
right,
because
it's
just
opportunity
to
trip
and
fall
and
get
things
wrong,
so
I'm
going
to
show
you
the
set
of
commands
that
does
all
the
things
I
just
said:
I'm
gonna
run
them
all
at
once
and
then
kind
of
talk
through
what
they're
doing
so.
C: The first command here does the initialization that sets up the project structure I talked about. It sets up the vendor directory; it unpacks a tar, so everyone's on the same set of libraries and you don't get people with slightly different versions of transitive dependencies that then have different bugs and different behaviors in them. We all get to work from the same base, and then it sets up the base directory structure.
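As a rough sketch, the setup commands described here and the Bazel invocations mentioned later look something like the following; the command names and targets are hypothetical placeholders, since the exact CLI is not shown in the recording:

```shell
# Hypothetical command names; the real CLI from the demo may differ.
toolkit init repo          # unpack vendored deps, lay out the project skeleton
toolkit create resource    # generate types.go, controller, and tests for a new kind
bazel run :gazelle         # regenerate the Bazel BUILD rules
bazel run //cmd/controller # build and run the controller against a cluster
```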
C: The second command here creates your first resource, and this sets up the stuff I was talking about: it creates the schema file, the types.go file, with the right annotations. It sets up the test for that. It sets up the controller function, and it sets up the test for the controller function.
C: It wires all the pieces together, and now you have a nice place to drop stuff. You can see here the output from running that command: it gives you a nice little text that says edit your API schema here, that's where the schema is and that's where the test is; edit your controller function here, that's where the business logic is, and so on.
C: And then you can see at the bottom it just printed "insert your code here to reconcile", so you can see it's actually running. The last commands I ran were a bazel run to generate the Bazel rules and then a bazel run to run the controller, and so you can see we actually have CRDs installed into a Minikube cluster with the whole thing running. So that's what you already have right now, just starting from scratch.
C: You know that you can get an API working, and from here you can edit it and refine it, but you're never in that stage where you're like: oh my gosh, I can't get the vendoring working, and I have this problem that isn't related to the framework I'm using at all, it's just this other random problem. So I'm going to open up some of these files. The first I'll open is the schema file.
C: This should be pretty familiar to everyone, but it does set up these comments for you: the genclient and the deep-copy stuff. This stuff changes from time to time, and it's really frustrating when you don't get it correct. So we just do all that for you; we show you what the JSON tags should look like for the spec and the status, and then just say "insert your code here".
C: So I'm going to go ahead and add this line here that says it has a maximum value of five. The reason I'm editing only here, instead of some other file, is that now all my code is together, and when you look at the Go struct, you can actually see that it has a maximum right there. The next thing I'm going to do is pop open the test.
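For reference, a constraint like that maximum ultimately surfaces in the generated CRD's OpenAPI validation schema, roughly like this (an illustrative fragment; the field names here are examples, not what was shown on screen):

```yaml
# Illustrative fragment of a generated CustomResourceDefinition
spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            replicas:
              type: integer
              maximum: 5
```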
C: So what this does is it gives you a nice test, a storage test, and this now allows me to add to it the ability to check the validation I just added. I'm not going to do that at this moment, but you can now use this test to check that you're doing things correctly, and you don't have to spend all that time saying: okay, how do I get an API server spun up, right? This test will actually spin up a local control plane for you.
C
It
works
on
a
Mac
that
works
on
Linux
and
spin
up
a
local
control,
plane,
install
the
CR
DS
and
make
sure
everything
started
working
great.
So
that's
your
schema.
So
the
next
piece
is
your
controller.
So
we
generate
a
controller
for
you.
I
know,
there's
actually
a
couple
solutions
in
this
space,
but
this
one's
integrated
into
this
framework
and
does
something
similar
where
it
creates.
This
reconcile
function
for
you
and
it
goes
ahead
and
says:
okay,
anytime,
there's
an
update
or
creation
for
one
of
these
types.
It's
going
to
call
this
function.
C: You can call this function, and it makes sure it follows all the correct best practices that the Kubernetes APIs use, like shared informers, so you're not creating a new informer for each type. It makes sure that it has a queue where each key is only there once, so you don't reconcile the same thing ten times in a row, which can be tricky to get right: if you're just doing a pure watch and reconciling on each watch event, you won't get exactly what you want. And there's a nice init function down here.
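The deduplicating queue behavior described here, where each key appears at most once so a burst of watch events triggers a single reconcile, can be sketched like this (a toy illustration, not the real client-go workqueue):

```python
from collections import deque

class DedupWorkQueue:
    """A work queue where each key is queued at most once at a time."""

    def __init__(self):
        self._queue = deque()
        self._pending = set()

    def add(self, key):
        # Ignore keys that are already waiting to be processed.
        if key not in self._pending:
            self._pending.add(key)
            self._queue.append(key)

    def get(self):
        # Hand out the next key; it may be re-added after this point.
        key = self._queue.popleft()
        self._pending.discard(key)
        return key

    def __len__(self):
        return len(self._queue)

q = DedupWorkQueue()
for _ in range(10):
    q.add("default/my-object")   # ten rapid watch events for the same object
q.add("default/other-object")
print(len(q))  # 2: each key is queued only once
```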
C: You can see the create, right; it makes sure that there's not an error, and then it checks that a reconcile happens within ten seconds. And here, after that reconcile happens, you can add your code to re-read any objects from the system and make sure that everything is working the way you want it to. Again, this is not an e2e test that requires a full cluster to spin up. It brings up a local control plane, so you can even run multiple of these on the same machine at the same time, but you are checking the actual observed logic.
C: And what this does is it'll actually install a namespace, set up RBAC rules, make sure service accounts are set up correctly, and set up, potentially, depending on what you're doing, certificates and all the pieces you need to run your controller in the cluster. So this means you don't have to figure out, what is the secure way of running my service, or whatever it is I'm doing. We just give you the secure way and then give you knobs to override these things.
C
So
it
has
this
strategy
that
it
creates
you
as
a
CRT
install
strategy.
You
can
actually
go
ahead
and
embed
this
in
another
struct
that
you
define
and
then
go
and
tweak
different
pieces
and
say:
hey
actually
want
to
install
but
staple
sets
that
have
employment
or
whatever
it
is.
You
want
to
do
and
then
go
ahead
and
install
this.
C
One
other
thing
was
showing,
while
we're
on
the
our
back
rules
is.
This
can
actually
allow
you
to
customize
how
your
our
back
rules
are
defined,
so
it
will
automatically
generate
our
back
tools
for
the
things.
I
didn't
know
that
you're
creating
for
your
api's
and
give
you
the
ability
to
do
whatever
you
want
for
those
pieces.
But
what
happens
when
you
want
to
start
watching
pots?
Well,
when
you
want
to
start
watching
pods
now
you
need
our
back
rules.
Allow
you
to
watch
and
list
those
or.
C: Or Deployments, or a different thing. And when those rules are loosely coupled to the controller, it's easy for them to get out of sync, right? So what we've done here is provide annotations that let you just annotate your controllers with the extra RBAC rules they require, and then we generate and install them automatically. This means that when you update your reconciliation logic, in that same code review someone could say, hey, add the RBAC rules right there, and those will get generated for you.
C: So this is code generated for you from all those annotations, and you can see it generates a bunch of different stuff. This is where the RBAC rules get generated, so you'd have to regenerate the code, but you can see here that the getRules function defines all the RBAC rules that get generated from those comments.
C: You can see the RBAC roles it just generated for you; there are the Deployments and Pods in different RBAC rules. So that's all, anyway; that's it. The goal here is just to make it so that you have to know less about the best practices today for setting up each one of these pieces, and can just focus on writing the business logic for your CRDs and controllers.
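The generated RBAC for watching pods and deployments ends up as ordinary Role rules, along these lines (an illustrative fragment; the role name and exact verbs are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-controller
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
```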
F: There are a few things: there were some libraries called operator kit, I think, literally called operator kit, and there's another one that popped up more recently, but I don't know how these things compare to one another or to what you just showed. But the kind of thought I had, my main question, basically, is: what are you thinking of doing around the rebase problem, assuming there is a problem? Like, you know, the generator has updated; how do we...?
C: Some of the techniques I'm trying to use for that are to provide default instances of almost all the APIs, so that when a new function is added to an interface, for instance, it doesn't break your code; we just add it, you embed the default instance, and then we give you a reasonable default behavior. So I think code generation, and giving you sane, empty defaults, is probably the best way to address that. And there's an install command.
C: So if you download the latest release, it'll update all your vendored libraries to the latest versions. The second piece, on the operator kits: I also wrote apiserver-builder, and a lot of this code is coming out of that. apiserver-builder was targeted at aggregated API servers, and folks that were writing CRDs were saying: hey, I really like some of the pieces of that.
C
Sorry
opposed
that,
and-
and
so
this
is
kind
of
coming
out
of
that-
and
if
you
saw
in
the
Installer
to
actually
can
install
I
created
API
servers
as
well
so
I'm,
hoping
that
we
end
up
on
a
kit
that
can
instant
that
is
agnostic
to
do.
You
want
these
through
aggregate
api's
or
CR
DS
and
can
just
generate
like
the
implementation.
C
They
don't
even
get
up
anywhere
right
now,
it's
a
fork
of
API
server
builder,
because
I'm
still
trying
to
work
through
figuring
out
what
repo
this
lives
will
end
up.
Living
in
there's
I.
Think
the
kubernetes,
incubate
and
cigs
repos
have
isn't
quite
totally
settled
down
yet
so
I'm
trying
to
get
it
as
quickly
as
possible.
C: So I'd say there are a couple of things we can do better, or rather I can do much better. There's absolutely code here that gets generated that doesn't need to be generated, but I started out by taking the examples and generating them, and then working backwards from there. The idea is to refine that into libraries. The nice thing about that approach is that at least it's canonical.
C
Instead
of
trying
to
write
it
from
scratch,
and
maybe
getting
some
details
wrong
well,
but
it
could
putting
into
libraries
is
better
the
things
that,
like
the
the
Informer's
piece,
we're
gonna
always
need
cogeneration,
for
that
until
API
machinery
radically
everything's
called
doing
things.
The
one
area
where
it's
probably
a
toss-up
is,
if
you
look
at
the
like
the
reconcile
function
in
my
example,
takes
the
actual
type.
A
Good
luck,
so
you've
got
something
working
and
a
road
map
yeah
got
again
so
for
anybody
who
doesn't
know,
there's
gonna
be
and
there's
a
new
structure
coming
from
incubator
to
cig,
repos
and
I.
Think
it's
finally
in
place.
If
you
track
down
Aaron
quicken
Burger,
he
can
help
you
I
think
he's
the
one
who's
helping
get
that
going
now,
but
I
think
all
the
I's
are
dotted
and
T's
are
crossed
and
wants
to
set
one
of
those
up.
C: So I would say API machinery. Or, no, sorry: I went to SIG Architecture a few weeks back to ask the question of where something like this would go, and what they told me was that it would either go in API machinery, if that SIG was going to sponsor it, or it should go as a side project and just be, like, under GoogleCloudPlatform or something like that. Okay.
E
And
then
only
one
other
comment
so,
having
spent
a
lot
of
times
a
lot
of
time
modifying
and
adding
two
types
taco
and
doing
cogeneration
inside
of
kubernetes.
The
thing
I
like
about
this
is
it
actually
is
a
somewhat
significant
improvement
on
what
I
do
today
right
when
I'm
modifying
stuff
inside
of
kubernetes,
core
and
mail
and
I
thought
was
it
like
for
the
operator,
kids
from
the
ones
I've
seen
they
seem
to
provide
idioms
kind
of
on
top
of
generated
code
as
opposed
to
handling
the
generated
code
in
the
way
Phil
said.
C
Yeah,
that's
a
good
point
like
you
can
we
could
you
can
already
break
this
up
a
bit,
but
we
could
make
it
more
clear
how
to
do
that,
where
you
don't
have
to
take
the
controller,
for
instance,
from
this
you
can
take
all
the
other
pieces
and
then
you
could
just
hook
in
a
your
own
controller
kit
piece.
Alright,.
A: So please let us know, and we'll share here, and if you add stuff, please come back and demo it for us. But we do have some other things to get to in the agenda today, and the next thing up, I believe, is the discussion topics. The first one we had was the Helm summit, which happened this last week. For the Helm summit there are notes available; somebody added the link to the agenda, so thank you. We took notes in the sessions, so here's the basic rundown of how the Helm summit went.
A
It
was
a
two-day
event.
The
first
day
kicked
off
with
a
keynote.
Then
there
were
a
number
of
sessions
that
were
20
to
30
minutes.
It
was
a
singletrack
conference,
so
everybody
was
in
the
same
track.
That
was
followed
by
a
series
of
lightning
talks
that
were
about
five
minutes
long
and
a
lot
of
this
touched
on
how
people
were
using
helm
and
some
of
the
use
cases,
and
and
even
how
people
were
working
around
things
in
home.
But
there
was
a
lot
of
the
pragmatics
of
helm
today
and
people
using
it
at
different
companies.
A
There
was
I,
think
wpengine.
There
was
reddit.
There
were
some
others
who
talked
about
what
they
were
doing
the
second
day
in
the
first
days.
Keynote
was
about
where
home
started
and
where
it's
going.
The
second
day
kicked
off
of
the
keynote
on
again
where
things
are
going
and
where
we're
going
specifically
with
helm.
3
from
here,
there
was
a
series
of
lightning
talks.
Many
of
these
were
more
forward-looking,
lightning
talks.
A
There
was
a
Q&A
with
the
helm
and
charts
maintainer
who
were
present,
and
then
there
was
the
probably
half
the
day
was
an
unconference
dial
and
where
we
talked
about
where
things
are
going
helm
through,
whatever
people
wanted
to
talk
about,
and
the
notes
that
are
available
linked
here
onto
the
helm
summit
section
are
notes
from
the
different
talks
that
happened
that
day
in
the
different
lightning
talk
session
or
not
lightning
talks
the
unconference
talk
sessions.
There
were
some
takeaways
I'll,
tell
you
some
of
the
duze.
We
came
up
with
afterwards
and
I.
A
Think
I
noticed
Matt's
colada
do's
away
to
extract
user
stories
from
some
of
those
sessions,
because
one
of
the
things
that
home
3
wants
to
focus
on
is
user
stories.
What
are
those
not?
It
can
get
easy
to
get
into
solution
airing
engineering.
But
what
is
the
root
thing
that
needs
to
be
solved
for
whom?
And
we
want
to
kind
of
try
to
start
extracting
some
of
those
from
the
different
we
roundtable
sessions
that
had.
A
We
want
to
kind
of
document
who
the
users
are
and
in
my
mind
anyway,
and
this
will
go
through
pull
request
review.
You
start
with
the
app
operators
who
are
the
most
important,
because
that's
the
end
user
who's
got
to
get
things
done.
Then
you've
got
to
land
extension
developers
is
another
type
of
user.
Then
there's
the
developers
of
helm
themselves
and
when
I
look
at
the
priorities,
I
kind
of
do
put
them
in
that
order,
but
we're
gonna
document
this
and
get
it
going.
A
There
was
a
lot
of
talk
on
CRTs
and
so
with
CR
DS.
We're
gonna
set
up
a
meeting
to
discuss
them.
I
will
be
sending
out
in
a
little
bit
here
day,
a-doodle
to
start
collecting
times
for
people
who
are
interested
in
discussing
CR
DS,
how
we
use
them
the
future
of
them
to
try
and
figure
out
where
those
fit
in
with
the
application.
Cr
DS
that
I've
been
talking
about
with
the
app
now
forking
group,
how
this
place
into
helm
and
kind
of?
How
do
we
bring
this
together?
There's
really
complicated
ways.
A
We
could
go
simpler
ways
and
what
we
learned
there
was.
We
aren't
in
agreement
and
sitting
down
at
a
table
for
a
half
an
hour
or
even
having
side
discussions
afterwards,
wasn't
enough
time
to
come
to
conclusions
and,
of
course,
we're
gonna
communicate
a
path
forward
to
yes,
there
was
lows
from
us.
Thank
you
Brian.
It
was
another
one
of
the
examples
there
and
and
stuff
similar
to
that.
That's
using
crts.
A
We've
got
a
lot
to
talk
about
to
work
out
all
the
bugs
and
edge
cases
and
kinks
to
cover
well
what
helm
needs
to
do
for
people,
and
then
there
was
we're
gonna
command
and
communicate
the
helm
three
paths
forward,
just
to
kind
of
say,
hey
folks.
We
want
to
make
sure
everything's
clear
communicated.
Well,
so
people
don't
say
we
don't
know
what's
happening.
We
want
to
make
sure
that
you
know
helm.
3
is
kicked
off.
Where
do
we
go
from
here?
A
What
are
those
steps
we
want
to
make
sure
everything
is
communicated
for
everybody,
and
so
that's
one
of
the
other
action
items.
That's
going
to
be
happening,
and
so
I
kind
of
just
gave
an
overview.
Real
quick
we've
got
a
number
of
other
people
on
the
line
who
were
there?
Does
anybody
else
want
to
jump
in
and
give
their
two
cents.
I: Right, I'll just start from the top then, and if this is boring, you can chat or something and we'll take it from there. You're all the first people to see these slides, so they might be a little rough. All right, so kubectl apply is a thing where you give it partially specified objects and magic happens: those show up in your control plane, and then controllers actuate those objects and they show up in your cluster.
I: So there are really sort of two things that kubectl apply is. One is the operation that you perform on a single object, to make that object look the way you specified in your manifest, and the other is the operation that you perform on an entire directory full of objects: you can dump that whole thing into a kubectl apply and it'll make them all. I'm mentioning this right at the beginning just to warn you that I'm only talking about the single-object apply case.
I: So, just a warning. Okay, let's talk in more detail about what kubectl apply is. Right now you have an object that's partially specified; it doesn't have to be fully specified, because many fields are defaulted, or maybe the object on disk is a different version than the cluster knows about, so you may not even have all of the fields. But the essential fields are filled out on this object on disk. You run it through kubectl apply, and if the object doesn't exist in your cluster right now, it will be created.
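For instance, a manifest like this one is enough for apply even though most Deployment fields are left out; the server fills in the defaults (a minimal illustration, not a complete working Deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2   # the one field we care about; everything else is defaulted
```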
I: How is this better than the alternative? That's maybe a better way of asking it. The alternative to having this system, where you specify fields and then the system makes it look like that, ends up looking like a bunch of separate API calls. Suppose you want to change the number of replicas; well, maybe you have two, and you're writing a system to keep this up to date.
I
You
have
to
integrate
with
a
particular
API,
that's
specifically
for
changing
the
number
of
replicas
or
you
want
to
add
an
environment
variable
to
your
container
in
a
pod
template.
You
have
to
write
a
an
API
specifically
for
doing
that.
So
the
idea
here
is
phrasing
this
as
a
manifest
file.
A
declared
a
manifest
file
means
instead
of
writing
a
API
integration
for
every
resource
for
every
operation
on
that
resource,
which
is
a
lot
of
API
calls.
I: Another aspect of this is that users interact with the exact API content that they're submitting to the system, and this is good for a number of reasons. One is that users get familiar with the API resource layout, and if the tools along the path from user to actuated system object are all working on this common resource, then the tools themselves can be composable. If a tool is putting out bad output, the user can examine that output, and they're familiar with its format.
I
The
the
the
user
is
learning
the
actual
API
resource
in
the
system
and
not
a
not
some
other
language
that
that,
like
constructs
that
resource
for
them,
so
your
knowledge
is
more
transferable
if
you
switch
to
another
tool
so
that
that's
sort
of
like.
Why
is
this
a
good
thing?
Why
do
we
want
to
support
it
and
I
think
this
is
really
the
thing
about
the
kubernetes
api
that
is
actually
in
advanced
over
other
api
systems
like?
Why
do
we
have
the
kubernetes
api
system?
This
is
why
you
don't
do
this
with
G
RPC.
I
So,
after
all
that,
why
is
it
good
I
have
to
tell
you
why
it's
not
good,
and
you
should
not
do
anything
imperative
with
this.
It
doesn't
make
sense,
don't
charge
credit
cards,
don't
open
the
pod
bay
port
doors,
that's
not
what
this
is
for
most
so
so
most
things
in
the
kubernetes
ecosystem
are
declarative
API.
I
Okay.
So
now
that
we
know
why
it's
good
and
why
it's
not
good?
Why
is
it
hard
and
the
reason
why
it's
hard
is
because
the
the
input
to
this
operation
is
the
users
previous
applied
state
and
their
current
desired
state?
And
from
these
two
pieces
of
information
we
have
to
deduce
what
the
user
was
trying
to
accomplish
so
I
think
in
general.
I: that's like, you need the full operational toolkit of human intelligence to figure that out, so we're going to cheat and reduce the scope of the API surface area until it's unambiguously defined. But that's the idea. Okay, so that was just the "why". The goal of kubectl apply (well, currently it's kubectl apply, but I'm changing it to just be apply) is to allow all users and systems to cooperatively determine the desired state of an object. If you look at my speaker notes here:
I
So
if
we
want
a
bunch
of
stuff
and
I
say,
the
goal
here
is
that
the
control
plane
delivers
a
comprehensive
solution.
This
is
because,
right
now,
all
of
this
is
written
in
the
client-side,
so
Q
control
actually
has
logic
which
stores
an
annotation
in
objects,
and
when
you
run
you
control
apply,
it
reads
your
object
off
of
disk.
It
reads
the
annotation,
it
does
a
diffing
operation,
it
constructs
the
strategic,
merge
patch
and
it
sends
that
to
the
to
the
API
server
yeah.
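Concretely, that client-side state lives in an annotation on the object, along these lines (the value is trimmed and reformatted here for readability):

```yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
       "metadata":{"name":"my-app"},"spec":{"replicas":2}}
```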
I
So
that's
what
it
does
today.
Our
goal
is
to
formed
the
correct
version
of
this
operation
and
it
has
to
be
located
in
the
control
plane.
Okay,
non
goals,
as
I
said,
I'm
not
going
to
worry
about
the
multi-object
apply
case
for
the
moment.
I
think
we
have
to
solve
the
single
object
apply
case
before
we
can
think
about
fixing
the
multi
object,
apply.
I
I'm,
not
just
I,
think
it's
a
reasonable
extension,
but
I'm
not
discussing
today,
adding
an
API
it
does
arbitrary
merges
so
like
I,
send
two
objects
to
the
server.
It
sends
me
back
the
result
without
activating
anything
I'm,
not
talking
about
that.
Api
I
think
it's
useful,
but
that's
sort
of
kind
of
a
trivial
addition.
I
After
we
do
the
big
chunk
so
and
I
don't
want
to
design
it
here,
there's
additionally,
some
sources
of
user
confusion
that
cannot
be
fixed
by
moving
things
into
the
into
the
control
plane
and,
and
so
I'm
gonna
exclude
those
for
the
moment,
and
that
is
some
users
change
the
name
of
their
object
on
disk
and
then
rerun
queue
control
apply
and,
to
their
surprise,
they
get
a
second
object
rather
than
having
the
first
object
be
renamed.
This
is
just
sort
of
fundamental
to
the
way
the
system
works.
I: So what is the problem? Here are some problems with our current system. A user does a POST, then they change something and apply: surprise, when you did the POST it didn't store the last-applied-state annotation, and something strange might happen. Another one: suppose you start with an apply; you or somebody else does a kubectl edit, and then later you apply again with your original manifest: surprise, that kubectl edit is reverted, and maybe you wanted it to remain in effect.
I
Alternatively,
maybe
I
tweaked,
some
annotations,
maybe
I'd
make
some
changes
to
the
last
update,
annotation
and
a
plat.
Something
strange
will
happen
if,
if
I
make
a
resource
and
apply
that,
and
then
my
coworker
makes
a
few
other
changes
and
applies
that
also
then
something
very
strange
will
happen,
because
the
file
they're
applying
is
not
related
to
the
file
I
apply
and
the
dipping
operation
will
return
nonsense.
I
So
why
is
it
hard?
We've
had
cute
control
apply
since
very
near
the
beginning
of
the
project,
and
it
still
has
these
bugs,
but
we've
had
a
bunch
of
really
smart
people
working
on
it.
So
what
what
has
sort
of
gone
wrong
and
the
the
conclusion
I
came
up
with
two
sort
of
reasons
for
why
why
it's
still
broken
I
think
the
sort
of
the
practical
reason
is
there's
too
many
components
have
to
change
at
the
same
time
in
order
to
get
a
fix.
I
So
if
you
want
to
fix
a
bug
in
the
and
the
diffing
merging
or
any
of
that
logic
right
now,
today,
you
have
to
change
a
schema.
You
have
to
change
the
strategic,
merge
patch
format,
which
is
not
versioned
and
very
difficult
to
change.
You
have
to
change
the
client-side
diffing
logic.
You
may
need
a
server-side
as
well,
so
it's
like
four
different
places.
You
have
to
change
to
get
anything
fixed
and
because
we
don't
want
to
break
old
clients
you're.
I: This is the reason that I hope to actually fix just by delivering the design doc, which is: I think developers didn't have a clear mental model for how this feature was supposed to work, and so some fixes fixed it in this direction and some fixes fixed it in that direction. So I hope, if everybody can read one document and understand how the feature is supposed to work, we can get our fixes aligned in the same direction.
I
Okay,
so
actual
components
of
the
of
the
proposal,
removing
the
logic
into
the
control
plane,
making
I'm
going
to
make
a
new
logical,
apply,
verb,
I'm,
writing
a
dry
run.
This
is
so
client
can
see
like
if
I
applied
this.
What
are
you
going
to
do?
I
want
to
track
a
last
applied
state
per
client
ID.
This
means,
if
Alice
applies,
one
thing
Bob
applies
the
other
thing
their
last.
They
don't
work
off
the
same
last
applied
state.
I
It
also
allows
us
to
generalize,
apply
two
controllers:
that's
right
now,
controller
send
patches,
or
sometimes
they
update
the
entire
object.
I
think
I
think
the
apply
if
we,
if
we
get
the
bugs
fixed
apply,
is
just
a
more
general
and
more
functional
variation
of
patch,
with
a
more
understandable
format.
I
So
it's
a
top-level
other
API
verbs
can
understand
it.
We
can
use
the
stored
last
applied
States
to
help
with
error
messages.
Yeah,
you
know,
like
I,
think
I
cover
that
in
a
minute
we
can
use
this
extra
field
to
detect
when
a
user
is
sending
a
object
that
they
sourced
from
get,
because
it
will
include
this.
You
should
never
have
this
field
when
you're
sending
a
apply
request,
yeah,
and
we
to
support
this
everywhere.
I
We
need
to
add
some
extensions
to
open,
API
schemas,
so
so,
like
I,
think
this
needs
to
work
on
C
or
D
resources,
as
well
as
built-in
resources
we're
running
low
on
time.
So
I
will
go
a
little
faster
yeah,
so
new
API
verb.
My
current
recommendation
for
the
implementation
is
we're
just
going
to
send
it
as
a
patch
you'll
send
a
particular
content
type.
I
If
you
get
a
conflict,
you
get
a
nice
error
message.
Each
horizontal
pod
autoscaler
is
managing
spectat
replicas.
Are
you
sure
you
want
to
change
it
from
2648
down
to
five?
Probably
you
don't,
but
this
easy
to
do,
that
with
apply
today.
I: Support dry run. Multiple appliers. Apply is more general than patch, so let's use it in controllers instead of patch. And I would like to deprecate strategic merge patch.
A: So, you know, I don't know if a lot of people know this, but strategic merge patch, which has trouble (and I understand the issues with CRDs), is actually something used in Helm today. Being able to call an API to deal with this, since we're doing it with standard objects today, instead of importing libraries from kubectl and doing it ourselves and passing it along, would make managing things a lot easier for us, I think. I'm one of the Helm maintainers, yeah.
A: And I'll be honest, trying to import it from kubectl and then use that... if anybody tried to use Helm 2.0, the bugs with CRDs were actually changes in the API that we didn't know about. Just dealing with this space, because it's difficult with the changes and moving parts, and, as you referred to it, with it being broken sometimes, has caused us some heartache and some pains. So making this easier, I think, would make a lot of people happy too.
E: To extend on what Matt said, a lot of the reason people have been vendoring code from kubectl is to get some of the semantics that are implemented in kubectl, because you can't just use client-go to get them. To my knowledge, this is like the last one: the reapers are dead, and a lot of the other things that were kind of embedded as imperative logic in kubectl have been mitigated completely, but apply is still a big one. And thankfully, because Helm vendors it, Helm does it right.
I
Yeah
sounds
great,
so
I
was
my
purpose
in
presenting
this
was
twofold:
one
just
informational,
so
you
all
know
what
I'm
thinking
and
to
to
collect
any
blockers
or
things
you
think
I
need
to
cover
in
the
design.
My
hope
is
to
take
this
to
sick
architecture
as
soon
as
they
have
a
free
slot
and
get
like
a
design
approval
or
something
of
that
nature.
So
please,
let
me
know
if
there
is
anything
you
think
the
design
needs
to
address
so
have
you
started
it
kept
for
this?
A: Well, it looks like our time is coming to an end, and I know people have other meetings. If you could circulate the document and this presentation to the SIG Apps mailing list, we'll definitely take a look; I'm really curious, especially since we're in Helm 3 planning, how we can approach this. So thank you. Yeah.
A: Right, well, with that, I think we can about call it and give everyone a chance to have a couple of seconds before they get to their next meeting. Thank you, everyone. Oh, and we normally do stand-ups in this meeting; because we ran long with demos and discussions, which is perfectly fine, those have been bumped to next week. It was one of those things we thought might happen. So with that, everyone have a wonderful week, and we'll see you on the Internet. Bye.