From YouTube: Kubernetes SIG Apps 20170619
Description
Kubernetes 1.7 updates for sig apps, Service Broker demo, Kubernetes JSON schemas, 3 simple use cases for defer containers, and discussion
A
Everyone, welcome to SIG Apps. It's June 19th. If you're not speaking, just mute yourself, please. My name is Michelle, and Dan's going to take our notes for today. Thank you, Dan. We have a few announcements. I think the most important thing on the table is the Kubernetes 1.7 updates and progress. So, Dan, do you have anything for us there?
A
Dan, are you talking? Because they can't hear you. Okay, let me work on that. Okay, thank you. So we'll just go right into demos and then come back to 1.7, and first up we have Aaron. He is going to do a Service Broker demo, and Aaron's from Microsoft. Cool.
C
Thanks, Michelle, and thank you everyone for having me. I've been to a couple of these meetings, but not enough, sadly, so I'm really excited to be here. As Michelle said, I'm Aaron Schlesinger. I'm a colleague of hers; I work at Microsoft, formerly Deis, and I am a co-lead of the Service Catalog special interest group.
C
So, as Michelle also mentioned, we're working on a project called Service Catalog. You can check it out in the kubernetes-incubator, and the point of it is to connect Kubernetes with open service brokers. You might also know open service brokers as Cloud Foundry brokers. These are REST API servers, written in any language, that adhere to a fairly simple specification.
C
There are a few important operations on these brokers: there's provision, bind, and get-the-catalog. Those are basically the three big things that we're going to focus on today and that you'll need to be familiar with. Get-the-catalog is fairly straightforward: it just gets the list of things that the broker can provide. You'll probably recognize some common brokers (you may have noticed them already), things like a broker for AWS or a broker for Google Cloud.
C
The demo I'm going to show today, since I work at Microsoft, is of course going to show provisioning an Azure PostgreSQL database. I actually have a few slides, so I'm going to share my screen. Don't worry, it's not going to be too painful; most of this will be demo. So let me go and try to share my screen here.
C
And
can
anybody
not
see
this?
You
speak
up.
If
you
can't
see
this
okay
I'll.
Take
that
as
a
all
good.
Let
me
try
and
make
this
little
bit
better,
all
right.
So
yeah
as
I
mentioned
this
is
the
intro
slide.
So
the
way
that
I
look
at
Service
Catalog
comes
from.
Basically
this
knowledge,
anyone
who
runs
stuff
in
kubernetes
or
almost
anyone
probably
knows
that
just
writing
a
bunch
of
pods
either
isn't
enough
or
you
have
to
do
a
lot
of
work
to
get
other
things
in
your
cluster
to
do
work
for
you.
C
Let
me
try
to
advance
this
slide.
So,
in
essence,
your
app
takes
on
service
dependencies,
and
these
are
more
than
just
like
your
apps-
is
a
ruby
app
and
it
relies
on
this
ruby
gem,
but
in
a
distributed
system.
It's
more
like
your
app
talks
over
the
network
to
that
Postgres
instance,
and
by
providing
a
kubernetes
native
API
to
provision
in
those
services,
so
that's
to
create
a
new
instance
of
a
service
and
then
to
bind
to
that
service,
and
that
is
to
say
to
indicate
your
intent
to
use
that
new
instance.
C
So
with
that,
we
are
building
we're
currently
in
alpha
we're
building
a
system
that
sits
on
top
of
kubernetes.
That
looks
a
whole
hell
of
a
lot
like
kubernetes,
but
it
provides
these
four
resources
and
it's
kind
of
touched
on
them,
so
instead
of
having
paas
and
services
and
so
forth.
Instead,
we
have
brokers,
that's
one
service
classes,
instances
and
bindings,
so
I'll
explain
those
in
the
demo.
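As a rough sketch of how those four kinds fit together, the manifests might look something like the following. This is illustrative only: the kind names and fields here loosely follow the alpha-era servicecatalog.k8s.io API of this period, and the exact names changed between releases.

```yaml
# Illustrative only: alpha-era Service Catalog kinds; field names are approximate.
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker                 # registers an Open Service Broker endpoint
metadata:
  name: masb
spec:
  url: http://masb.masb.svc.cluster.local
---
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance               # asks the broker to provision a service
metadata:
  name: my-postgres-1
  namespace: demo
spec:
  serviceClassName: azure-postgresqldb   # a ServiceClass pulled from the catalog
  planName: basic
---
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding                # requests credentials for the instance
metadata:
  name: my-postgres-1-binding
  namespace: demo
spec:
  instanceRef:
    name: my-postgres-1
  secretName: my-postgres-1-secret       # where the credentials will be written
```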
C
This thing will be an aggregated API server, so you will actually use kubectl to create new bindings and new brokers and so on and so forth. Even better, you'll be able to put those things into a Helm chart, and I'll touch on that a little bit more in a second. And then the last piece here, as I mentioned, is that this thing connects your Kubernetes cluster to brokers, and the brokers conform to this thing I mentioned called the Open Service Broker API.
C
Also
a
member
of
that
working
group,
so
we're
kind
of
talking
about
two
sides
of
the
system,
there's
the
kubernetes
side
and
then
there's
also
the
API
side.
That's
not
really
related
to
kubernetes
but
of
course,
is
very
important
to
the
kubernetes
effort.
So
let
me
shut
down
this
PowerPoint
slide
and
let
me
go
into
a
demo.
So,
first
and
foremost,
you
can
go
check
out
this
repo
and
I'll
I'll
provide
these
slides
shortly
after
I
do
the
demo,
so
you
guys
can
you
know,
go
and
save
it
or
check
it
out
or
whatever.
C
But
if
you
have
a
second,
you
can
head
over
here.
It's
github,
/ar,
schl,
ES
and
then
meta
as
your
service
broker
helm,
so
I'm
just
going
to
be
doing
a
demo
straight
out
of
the
readme.
So
let
me
open
up
my
terminal
I'm
going
to
make
this
nice
and
big
and
everybody.
Can
anybody
not
see
this
speak
up?
If
you
can't
see
this
okay.
C
Okay, so I'm going to be using kubectl to talk to the service broker. First things first, let me show you what I have installed in my cluster. This is an Azure Container Service Kubernetes cluster, and I have installed two apps. The first one is called catalog, and that is the Service Catalog software, so that again is the thing that connects my kubectl commands with the broker. And the broker is called masb.
C
So
that's
called
the
meta
Azure
service
broker,
and
this
is
just
something
again
that
conforms
to
that
broker
API
and
it
provides
me
as
I'll
show
in
a
second
with
access
to
a
bunch
of
different
Azure
services.
So
the
first
thing
I
did
and
I
actually
preceded
this
demo
with
a
few
steps
that
generally
take
a
long
time.
I
don't
want
to
have
you
guys
waiting
here
for
five
minutes
at
ten
minutes
or
something
spin
up,
so
I
did
that
stuff
beforehand.
So,
first
things
first
I
have
installed
a
broker
resource.
C
So
there's
the
resource.
This
is
not
by
the
way,
a
third-party
resource.
This
is
a
resource
that
is
provided
by
the
catalog
API
server,
so
you'll
notice,
here,
I'm
running
a
one,
I
think
I'm
running
a
one,
six
one
cluster
right
now,
so
I've
specified
a
new
context.
In
this
context,
points
to
a
new
cluster
quote:
unquote
cluster.
That
speaks
the
same
open,
API
spec
that
Coop's
ETL
expects.
So
this
thing,
like
I,
said
it
looks
a
lot
like
a
kubernetes
cluster
API
and
therefore
coops
ETL
can
talk
to
it.
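Concretely, that quote-unquote cluster is most likely just an extra cluster and context entry in kubeconfig pointing at the catalog API server. A hypothetical fragment (the server address, names, and user are invented for illustration):

```yaml
# Hypothetical kubeconfig fragment: a context that points kubectl at the
# Service Catalog API server rather than the core API server.
clusters:
- name: service-catalog
  cluster:
    server: https://catalog-api.catalog.svc.cluster.local
contexts:
- name: service-catalog
  context:
    cluster: service-catalog
    user: default
# usage: kubectl --context=service-catalog get brokers
```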
C
So
you'll
see
that
all
my
clips,
ETL
commands
are
going
to
end
up
using
this
context,
and
this
is
something
that's
going
to
be
going
away
in
1:7
and
beyond,
because
we'll
be
able
to
use
aggregated
API
scores.
So
that's
the
first
thing
I
did
was
I
created
a
broker.
Now,
after
I
had
created
the
broker,
Service
Catalog
picked
it
up
so
service
catalog
has
an
API
server
and
a
controller,
similar
architecture
that
you
see
in
kubernetes
core.
C
So here we have, essentially, the catalog of services that I can provision and bind to, and essentially get to use in my app: DocumentDB, which is NoSQL storage; Postgres; Redis; basically a pub/sub system; single server; and then blob storage, basically the Azure equivalent of S3. So now I have this list of things that I can use in my app.
C
The
next
step
that
I
actually
also
already
did,
was
to
create
an
instance
and
the
instance
is
the
declarative
resource
that
I
can
tell
service
broker
that
I
can
submit
to
service
broker.
That
is
to
tell
it
to
provision
an
instance,
but
what
I've
done
is
already
submitted
the
instance
to
tell
service
broker
to
provision
me
a
new
Postgres,
a
Postgres
database.
C
So
now,
you'll
notice
that
the
broker
here
and
the
service
class
here
did
not
have
a
namespace,
and
that
implies,
of
course,
that
they
are
available
in
the
entire
cluster
and
that's
by
design.
We
want
entire
clusters
to
have
access
to
the
catalog
of
services,
and
then
from
there
we
have
some
ACLs
that
allow
operators
of
the
cluster
to
decide
which
namespaces
can
get
access
to
which
parts
of
the
catalog.
C
Let's cat out the YAML, and you can see now this is the entire YAML of the instance. It's fairly straightforward and would probably look familiar if you have seen pretty much any other Kubernetes resource. If I scroll down here, we can see in the status field that we have these fields that basically deal with asynchronous operations. Here's one called asyncOpInProgress: when I first submitted the instance, this thing was true, and what that meant was that the Postgres instance was in the process of being created.
C
So
behind
the
scenes
we
we
create
quite
a
bit
of
infrastructure
in
Azure,
so
we
create
a
new
server,
few
other
things
couple
rules
and
then
finally,
we
create
the
new
database
inside
of
pulse
grid
so
for
about
10-ish
minutes
after
I
had
originally
created
instance.
This
was
true
and
there
was
a
condition
in
here
saying
the
instance
is
being
asynchronously
provisioned,
and
that
was
all
done
behind
the
scenes.
All
that
I
saw
in
the
instance
was
it's
being
asynchronously
provision
and
then,
finally,
after
all,
the
operations
were
done
in
the
broker.
C
My
controller,
the
controller
inside
of
Service
Catalog,
ended
up
just
continuously
polling
the
broker.
When
the
broker
finally
reported
that
everything
was
done,
then
this
message
said
what
it
says
now
and
the
reason
says
provision
successfully.
Everything
is
ready
that
it's
true
and
we're
all
good
to
go
so
programmatically.
If
you
were
talking
to
service
broker,
you
would
wait
for
status,
is
true
and
type
equals
ready
and
and
that
that's
how
you
would
know
that.
C
Okay,
now,
my
application
is
ready
to
start
binding
to
this,
and
what
binding
means
it
will
show
in
a
second
is
create
credentials
and
indicate
to
both
the
broker
and
to
the
operator
of
the
kubernetes
cluster.
Whether
that
may
be
that
your
application
intends
to
use
that
kubernetes.
Excuse
me
in
that
Postgres
instance
on
Azure.
So
let's
show
creating
a
binding
now.
C
So
you
can
see
here
again
fairly
straightforward,
manifest
that
you're
probably
familiar
with.
If
we
go
down
to
the
status,
we
have
something
that
looks
a
little
bit
like
what
we
saw
in
an
instance.
We
can
see
injected
bond
result
status
is
true,
and
if
we
go
up
here,
we
can
see
that
there's
a
field
called
a
secret
name.
So
what
secret
name
is
is
it
is
what
we
use
to
tell
service
catalog,
where
it
should
put
the
credentials
for
the
database
that
it
got
back
and
then,
additionally,
right
above
the
secret
name.
C
We
have
this
name.
This
thing
that
references
an
instance
and
that's
what
we
use
to
tell
service
catalog.
What
is
the
instance
to
which
I
want
to
buy?
So,
as
you
remember,
I
created
an
instance
of
Postgres
called
my
post
press
1,
and
this
is
what
we're
using
to
tell
service
catalog.
What
we
want
to
bind
to
so
I
want
to
bind
to
that
instance
I've
just
created,
and
then
I
want
to
write
the
credential
that
I
got
back
from
the
Azure
broker
out
to
this
secret.
So,
let's
check
out
what
that
secret
looks
like.
C
The app can just mount the secret of the same name. It will expect that the same keys are in the secret, and as long as it can use those keys to connect to a Postgres server, everything else will be ready to go for it. So this, in effect, takes away the complicated DBA work and the kind of informal credentials-passing that you might have had before.
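On the consuming side, the pod just references that secret like any other. A hypothetical sketch (the secret name matches the binding's secretName from earlier, but the key names depend entirely on what the broker returns, so `uri` here is an assumption):

```yaml
# Hypothetical pod consuming the credentials the binding wrote to a secret.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: example/my-app        # placeholder image
    env:
    - name: POSTGRES_URI         # key name is an assumption; brokers differ
      valueFrom:
        secretKeyRef:
          name: my-postgres-1-secret
          key: uri
```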
C
Because a binding and an instance now exist as resources, it also adds in a little bit of an audit trail: not only can we see that the binding and instance are there and connected to an individual application, but we can also consume the event log for all of those resources and have an audit trail of what happened as well. So with that, let me go back to the rest of my slides. There's not a whole lot more of them, just a couple of notes on Service Catalog.
C
Soon we're going to be accelerating our velocity, I should say, once 1.7 comes along, because, as I said, aggregated APIs will really enable us to provide a way more seamless experience for the whole thing. We can then put instances and bindings, like I just showed, into Helm charts, and once they're in Helm charts you can effectively bundle your application together with all of its service dependencies, and there's a self-documenting aspect that happens at that point.
C
Right, yeah, there we go. So let's look at our binding again and make sure it's really gone. You can see that the binding is gone, and now what we would of course expect, because the binding is gone, as was mentioned, is that the Service Catalog made a request to the backing meta Azure service broker to do the unbind operation; and then, additionally, since the credentials would have been invalid after the unbind operation happened, the Service Catalog should have deleted the secret as well.
C
New pods that launch expecting that secret will now no longer be able to launch, and of course that's a good thing, because the Postgres no longer exists. So the whole lifecycle of create, update, and remove is handled in a Kubernetes-native way, just because we're using Kubernetes's familiar resources here. So with that, I'm going to stop sharing, and I know there's a bunch of chat, so if someone asked questions, if you could repeat them verbally, that would be greatly appreciated.
A
Hey Aaron, great presentation. I was curious, in your demo, why the binding didn't happen automatically as part of the provisioning. I understand why they have to be two separate steps, but for the workflow I was curious why you didn't have them together. And then later you explained you might want to use a different type of instance for a more production-ready environment; is that the reason why they don't come together?
C
A little more. So, one piece, as I mentioned: in 1.7 and beyond, what would be really helpful, and what we will definitely suggest at least, is that a binding and an instance go in the same Helm chart. But another thing, as was mentioned in chat now too, is that it is possible to create multiple bindings to a single instance. So effectively you can have one Postgres database, and then multiple applications can use it.
C
That's correct. So if you did a helm install right now, what would happen is, of course, the application wouldn't launch, because it wouldn't have access to the secret, and the binding wouldn't work because the instance wouldn't have completed yet. But what the controller behind the scenes will do is continue trying to bind, and of course what the Kubernetes core controller will do is back off until the secret's ready.
A
Right, thank you so much. We're going to hold off on any other questions so we can move on to the next item. Let's go ahead and do the SIG's 1.7 updates, if you're ready. Dan, again, yeah.
F
So hopefully you can hear me now. Good. So, 1.7: I added a couple of links into the notes. I guess the general status is that in two days, on the 21st, we're planning to cut the release candidate, and the goal for the launch is still a week after that, so June 28th. That's still what we're working towards. There's one link that has a list of all the open issues that need to be resolved before we can launch, and that's across all SIGs.
F
I think as of this morning there were about 27, and then the second link is actually specific to SIG Apps. There were three last night, but people have cleared them up, and there's only one remaining right now. So that's all that's left; we're actually in pretty good shape from our SIG's perspective. And this is down from about a hundred or so open issues about a week ago, so all the SIGs have been working towards making sure that they close these out. So that's the list.
F
If
there's
anything
that
you're
aware
of
that's
not
on
the
list,
like
you
know,
let
me
know
or
create
it
and
make
sure
that
you
tag
it
with
milestone
one
seven,
but
as
far
as
we
know
like
that's,
that's
everything.
That's
remaining!
So
that's
the
one
schedule.
If
there
are
any
other
questions,
let
me
know,
but
as
of
right
now
we're
still
working
towards
the
same
dates
that
we
had
originally
planned.
So
it's
still
set
first
June
28th,
okay,.
A
Alright, just let us know. Thanks for the update, Dan.
G
On that: so there was one issue that got marked as 1.8, and someone just came through and marked it, but Clayton has gone back and said it's important to have it at 1.7, so I think there's actually still two issues open for our SIG. I can link the other issue. The PR is already open for it and reviewed, so it's basically just e2e testing for StatefulSet update, and we'd like to get that in for 1.7, yeah.
G
So that is the one issue. The issue is not currently tagged with the correct milestone; it was originally, but someone came through and tagged it with 1.8, I think in order to just adjudicate it and resolve it, and I'd like to get it back to the correct milestone. Okay.
A
That works, thank you! Okay, let's go into the next demo. If you have questions on 1.7, just reach out to Dan or ask in the chat. Okay, the next demo is by Gareth. He's going to talk about Kubernetes JSON schemas and testing Kubernetes config files, and he's been doing some work behind the scenes there. So, Gareth, take it away.
H
So I've been thinking about and building a bunch of developer tools around Kubernetes, and I've started down a path that I'm hopeful might be interesting to other people doing similar things. People are probably familiar with the OpenAPI spec: basically, all of Kubernetes is described in this auto-generated document. It's currently up to fifty-six thousand lines of JSON, I just checked, so most people are probably not manually reading it.
H
But
one
of
the
interesting
bits
in
here
is
this
definitions
bit.
This
describes
all
of
the
types
in
coubertin's,
basically
using
a
superset
of
json
schema
and
so,
for
example,
api
groups
have
at
their
requirement
in
version
and
certified
kinds
of
there's
a
description
here
about
what
is.
They
have
different
properties
properties
of
types,
and
this
is
used
internally
to
auto-generate
clients
and
servants.
So,
and
so
everything
just
works
from
a
single
set
of
definitions.
H
You
can
also
use
this
to
alter
generate
different
types
of
clients
which
of
them
before,
but
I
wanted
to
build
things
that
were
that
didn't
require
existing
clients.
They
didn't
require
a
full
kubernetes
client.
They
didn't
require
a
server
to
be
running
and
I
just
wanted
the
JSON
schemas
so
because
there's
actually
a
whole
bunch
of
pools
and
libraries
that
just
support
Kirkland
schema
and
serves
I
haven't
seen.
Jeff
schema,
it's
just
a
way
of
basically
doing
what
we
just
looked
at,
describing
like
have
a
schema
of
some
data
in
JSON.
H
So
between
those
two
things,
especially
like
there's,
a
lot
of
JSON
schema
stuff
inside
the
spec,
but
it's
not
really
useful
for
just
raw
JSON
schema
tools
and
I
wrote
this
somewhat
happy
at
the
moment.
But
that
seems
pretty
much
work
open.
The
API
you
to
JSON
schema
tool,
that's
out
there
and
I'll.
Show
you
working
your
seconds
as
well.
Two
interesting
and
and
in
combination
of
that
opens
the
wrong
one.
H
In
combination
with
that
tool
and
the
bass
script
I
got
a
repo
hook
of
different
flavors
of
all
of
the
JSON
schemas
for
all
of
the
basically
the
last
sort
of
like
two
major
versions
everywhere
seams.
So
I'll
show
you
why
this
is
interesting
in
a
moment.
But
if
we
dig
in,
we
can
have
a
look.
That's
all
we're
talking
about
bindings
earlier,
and
here
is
the
json
schema
for
a
binding
and
so
silver,
so
much
Jason.
H
But
let's
see
that
in
action
and
white
interesting
so
and
all
I'm
doing
there
is
I'm
going
to
run
the
Oakland
API
station
skimmer
tool,
I'm
going
to
point
it
at
a
URL
in
this
case
I'm
just
pointing
it
at
get
a
master
that
could
be
at
your
humanity's
installed
at
the
adversity
of
the
URL.
That
exposes
the
actualization
to
add
the
open,
API
deck
and
I'm
passing
a
standalone
flag
which
I'm
going
to
be
detailed
with
people
are
interested
but
otherwise
gloss
over.
H
For most users, you wouldn't have to run this, in that I've already run it for a whole bunch of versions and they're available at URLs on the internet, so you can either just download them for local usage or actually point your tool straight at the URLs on GitHub. If that appeals, I might put up a nice proxy for that, with some pointers so the URLs are a little bit nicer and permanent.
H
Here we go. So that created a schema folder with a whole bunch of JSON files, and I've got a hello-nginx JSON: basically a typical config file for Kubernetes, in this case in JSON rather than YAML. I'll come to that in a second; the reason is simply that the standard JSON Schema tooling is actually just for JSON. There's not really a good reason for that, and it's easy to build anything on top.
H
So
I've
got
a
deployment
described
in
like
a
Korean,
easy
deployment
described
in
that
JSON
file
and
I'm,
making
the
output
slightly
nicer.
With
that.
The
share
can
cancel
acknowledge
this.
The
JSON
schema
tool
itself
is
pretty
low-level
and
I'm
going
to
point
it
at
the
specific
schema
for
the
thing
I'm
doing
so,
and
it
didn't
say
anything
because
it
turns
out
that
schema
file
is
valid
and
and
all
can
we
do
again.
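The check being run here can be approximated with no Kubernetes tooling at all. Below is a minimal Python sketch that implements only the type/required/properties corner of JSON Schema, applied to a toy stand-in for the generated deployment schema; in practice you would point a real JSON Schema library at the generated schema files instead.

```python
import json

def validate(doc, schema, path=""):
    """Tiny subset of JSON Schema: checks 'type', 'required', 'properties'."""
    errors = []
    types = {"object": dict, "array": list, "string": str}
    expected = schema.get("type")
    if expected in types and not isinstance(doc, types[expected]):
        errors.append(f"{path or '.'}: {doc!r} is not of type '{expected}'")
        return errors
    for key in schema.get("required", []):
        if not isinstance(doc, dict) or key not in doc:
            errors.append(f"{path or '.'}: '{key}' is a required property")
    for key, sub in schema.get("properties", {}).items():
        if isinstance(doc, dict) and key in doc:
            errors.extend(validate(doc[key], sub, f"{path}/{key}"))
    return errors

# Toy stand-in for the generated deployment schema: a spec must have a template.
deployment_schema = {
    "type": "object",
    "required": ["spec"],
    "properties": {
        "spec": {"type": "object", "required": ["template"]},
        "metadata": {"type": "object",
                     "properties": {"name": {"type": "string"}}},
    },
}

good = json.loads('{"metadata": {"name": "hello-nginx"}, '
                  '"spec": {"template": {}}}')
bad = {"metadata": {"name": None}, "spec": {}}

print(validate(good, deployment_schema))  # []
print(validate(bad, deployment_schema))
```

The second call reports both a missing required field and a wrong type, which is essentially the class of error shown in the demo.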
H
Specs
have
to
have
templates
for
deployments,
otherwise
they're,
not
fairly
firm
reserves,
and
that
now
errors
up
so
that's
saying
like
well,
is
a
violation
problem,
and
so
when
you
can
do
that
with
any
type
and
I'll
be
seen
Jason
and
just
using
the
Rajasthan
schema
tool,
and
that's
not
that
interesting,
it's
I
said
pretty
much
lower
level,
but
that
starts
to
enable
us
to
do
higher
level
things.
I've
got
all
the
ideas,
but
here's
sort
of
one
I
made
a
bit
earlier
and.
H
I've got a bunch of Kubernetes files, some in YAML and some in JSON, and I'm using the jsonschema library, and I've written some basic unit tests for those files. I'm using Python just as an example from an implementation point of view; importantly, there's about twenty lines behind the scenes, so you could implement anything. I might try and build a proper unit-testing library at some point.
H
But
here
we're
saying
we're
actually
building
like
unit
tests
for
our
config
files
and
for
all
of
them
they're
going
to
be
validating
and
that
we're
able
to
validate
them
on
the
fly,
because
we
have
those
schemas
handy,
I'm,
doing
some
of
the
tests
and
that's
sort
of
our
scope
really
here.
So
let's
see
that
actually
working
and.
H
Oh
everything
passed
so,
let's
break
something
so
I've
said
that
to
run
whenever
I
change,
any
of
the
Tom
big
files-
and
hopefully
I-
should
check
that
failed.
Non
is
not
type
string
that
said,
you'd
be
cast
and
I
could
like
to
again
stop
failing.
Replication
controllers
have
a
metadata
name,
and
that
has
to
be
a
spring.
H
Accessing
and
so
there's
no
and
again
like
there's
no
table
qgl,
there's
no
community
server
and-
and
in
this
case,
I'm
literally
just
using
the
schemas
from
master
largest
generators
and
I,
could
just
pipe
the
map.
The
URLs
that
I
got
in
that
repository
and
I
could
also
extend
this
to
test,
for
example,
and
this
file
against
a
number
of
different
versions
of
communities
rather
than
just
one.
H
We've
got
gamers
for
all
of
them
and
they
vary
slightly
in
different
places,
and
so
will
the
Yammer
file
work
on
like
one
for
series,
as
well
as
the
one
seven
series,
and
we
could
find
out
whether
it
will
be
valid
and
there's
all
lots
of
potential
for
building
tools
around
this.
And
this
is
sort
of
like
a
unit
testing
type
framework.
Ii
thing.
H
But
I
thought
also
I
extending
some
of
the
editors
there's
a
whole
bunch
of
JSON
schema
support
in
like
things
like
vs
code,
but
that
would
be
an
interesting
thing
to
explore
the
schema
star
and
if
we
get
the
jason
scheme,
is
so
communities
into
there.
I
think
a
bunch
of
editors
will
be
able
to
give
you
immediate
error
of
feedback
on
invalid
committees
convicts,
I
think,
doing
validation
tools
for
helm
and
I
showed
briefly
there.
H
Some
of
the
case
on
it
validation,
bits
pieces,
I've
got
a
puppet
module
that
generates
committees,
configs,
do
adding
validation
into
there
as
well
sort
of
on
my
list
of
things
to
do.
There's
also
the
whole
like
because
we
have
the
schemas
for
all
the
different
versions
generated,
and
you
can.
Basically,
if
we
can
do
a
clever
dick,
you
can
actually
see
the
evolution
of
the
types
again.
H
That's
useful,
like
migration
tool
for
migration
checking
and
fill
that
there's
all
sorts
of
like
testing
and
validation,
bits
that
this
might
make
a
bit
easier
and
so
I
wanted
to
demo
I,
mainly
just
sort
of
such
say:
hey
I've
done
this.
If
anyone
is
interested,
please
have
a
look,
and
hopefully
that
was
of
interest.
H
The other thing is the whole question of examples. For those of you who don't know, lots of the documentation has examples; are all those examples still valid? The answer is that no one has any idea, which probably means that, over time, no. And obviously it's easy to add checks for that which don't require anything else other than the schemas.
H
Not yet, yeah. Basically, I've been mainly interested in building this towards a unit-testing framework for config and other generated-type stuff, but the route to get there is very much via providing building blocks that are useful for other use cases. So, yeah.
H
If anyone is interested in using them, let me know. And I think, as well, related to the Service Broker demo and custom resources: if they're exported, you could totally regenerate the schemas for your own Kubernetes config.
D
So, cool. A little bit of history here: we proposed defer containers in 1.7, and we couldn't get it into 1.7, because, number one, we didn't get consensus in the community about the design, and number two, we couldn't wind up a proper discussion about the use cases. So here is my first item.
D
We
want
to
do
the
containers
back
in
1.8
and
then
we
want
started
as
early
as
possible
because
of
the
amount
of
work
involved,
because
we
are
going
to
add
a
attribute
in
the
code,
a
PA
of
our
communities.
We
might
might
end
up
doing
a
lot
of
examples
and
documentation
the
first
time
code.
So
this
might
need
a
lot
of
time,
so
you
want
to
bring
it
to
the
community
attention
as
soon
as
possible.
So
difficult
ain't
is
a
built-in
duty,
any
containers
or
public.
You
do
not
know.
Any
containers
here
is
an
example.
D
I
would
like
to
show
so
instantly
containers
have
now
graduated
to
the
actual
part
stick.
But
this
is
the
previous
example
which
I've
tried,
so
you
would
be
able
to
run
a
bunch
of
containers
before
actually
starting
your
add
containers
in
the
pod.
So
this
book
proposal
is
very
similar
to
any
convenience.
So
this
is
our
word
is
going
to
look.
The
initial
version
is
going
to
be
a
part
of
annotations.
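Since this proposal was not merged, there is no real syntax to show. Purely as illustration, an annotation-based defer container, mirroring the old beta init-container annotation style, could have looked something like this (the annotation key and everything inside it are invented):

```yaml
# Purely hypothetical: defer containers expressed as a pod annotation,
# by analogy with the old pod.beta.kubernetes.io/init-containers annotation.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cleanup
  annotations:
    pod.alpha.kubernetes.io/defer-containers: '[
      {"name": "cleanup",
       "image": "busybox",
       "command": ["sh", "-c", "echo draining && sleep 2"]}
    ]'
spec:
  terminationGracePeriodSeconds: 60   # generous bound; cleanup may finish sooner
  containers:
  - name: app
    image: example/app
```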
D
So I want to start with the termination grace period. I feel that there is scope for confusing things here, between a grace period and a deadline. What is a grace period in the real world? It's basically nothing but an extension to a deadline. In the real world, a bank would probably give you an additional few days if you miss your due date for that month on a credit card.
D
You
or
something
say
the
bank
gives
you
like
five
days,
our
grace
period,
but
you
end
up
paying
the
bank
on
the
second
day.
There
is
to
be
three
days
or
not
counted,
I
mean
nobody's
keeping
track
of
the
rest
of
the
three
days.
So
great
speed
is
something
that
you
would
like
to
achieve
your
goal
as
soon
as
possible,
after
the
extension
is
the
data.
So
what
point
we
are?
D
The
computing
example
is
that
you
try
and
take
a
log
on
an
index
or
a
semaphore,
and
then
you
try
for
fights
against,
but
there
are,
if
you
could
get
a
lot
in
the
first
or
second
second,
it
wouldn't
be
waiting
for
it
anymore.
Unfortunately,
from
whatever
have
experimented,
please
stop
hook,
definitely
range
for
the
entire
base
brigade,
even
if
you
could
complete
to
complete
the
process
or
determination
or
the
clinic
process
long
before
the
actual
race
to
be
at
the
time.
D
There are some applications which might need cleanup, and the time taken for those cleanups will be very hard to predict beforehand, especially when these cleanups include things like network file transfers or a lot of disk writes. These things depend on the infrastructure, so we should be able to provide a termination grace period generously, like, you know, 60 seconds, even though the cleanup steps could be much quicker.
D
We
expect
the
team
of
steps
to
they
completed
in
two
seconds,
just
to
be
safer.
We
should
be
able
to
give
a
longer
grace
period
and
then,
if
the
cleanup
completes
before
we
should
be
able
to
wrap
up
the
actual
application
itself,
so
I
want
to
present
three
simple
use:
cases
which
I
believe
is
only
possible
to
achieve
using
something
like
different
ranges
and
which
is
not
possible
to
implement
using
free,
stop
books.
So
the
first
use
case
is
stateful
applications
which
I'll
be
your
in
memory
to
sub
fuse
cases
that
innovator.
D
You
have
to
do
a
sleeper
motion,
a
giving
determination
or
you
have
to
rebalancing
of
your
shard
I
mean
if
your
application
is
a
java
application,
then,
if
you're
terminating,
you
will
have
to
worry
she
eventually
balancing
of
the
keys,
the
other
one
is
performing
as
graceful
simulation
for
a
multi
container
parts.
Most
of
the
pods,
which
is
not
multiple
containers,
you
are
only
bothered
about
the
main
container,
but
if
the
sidecut
container
also
has
some
sequence
and
that
needs
to
be
coordinated
with
the
other
container,
then
it's
very
difficult
fix
and
facial
books.
D
Finally,
this
would
nicely
give
us
a
initial
initial,
incision
and
termination
obstruction,
especially
for
the
higher
level
gated
controllers
or
operators
in
the
future.
So
very
simple
illustration:
imagine
a
table
set
with
four
replicas
and
because
of
a
lot
of
the
elections,
the
technical
tree
is
now
the
master.
So
three
things
can
happen
to
this
state
problem
set,
especially
the
final
replica.
D
We
have,
we
got
last
autumn
with
index
either
it
can
get
an
update
request
of
or
it
could
get
it
could
be
skinned
out
or
it
could
be
edited
out
because
there
is
a
better
eligible
part,
the
qualified
for
that
mode.
So,
in
the
two
cases,
once
we
beat
the
fog
beaches
of
state
of
termination-
and
it
is
doing
it
is
trying
to
secure
during
determination
this
period,
it
can
do
two
things.
One
is
sleeper
motion,
the
other
one
is
rebalancing.
If
it's
a
Charlotte
application,
it
might
do
rebalancing
overseas.
D
So
in
the
shadowed
application
again,
you
would
not
be
knowing
how
long
it
would
take
to
rebalance
point
number
one.
It
depends
on
the
application
state.
For
example,
if
you're
ready
cash
has
been
running
for
months,
it
might
have
a
few
gigabytes
of
data,
but
in
your
test
environments
that
this
could
have
only
three
megabytes
of
data,
so
be
balanced
with
two
megabytes.
We'll
take
this
couple
of
seconds,
both
rebalancing
like
a
couple
of
gigabytes
to
take
a
long
time.
So
again
you
should
be
able
to
predict
there.
D
There
might
be
some
use
case
in
the
other
areas
as
well,
so
here
the
use
case
I
could
think
of
it
as
a
family
application,
and
we
have
two
sides.
Our
applications
I
mean
we
could
see
a
lot
of
because
see
the
trends
going
towards
people
building
applications
like
this,
the
primary
application
gets
focus
on
it.
So
it's
for
our
values
and
then
we
have
high
cards
for
performing
membership
and
we
have
another
side
card
performing
act
event
States,
which
is
our
you
know,
leader
in
slaves,
which
is
but
this.
This
thing
is
like
this.
D
This type of design pattern is becoming pretty common. So if this is the pod, then terminating this pod might have some sequence. For instance, you would first need to ask the primary application to stop accepting new client connections, and then you should probably perform the active/passive switch: you should make the application passive from its active position, and then you should remove the membership of this pod from its peers, and then you should drain the connections.
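The shutdown described above is a strict sequence across containers, which is exactly what parallel per-container preStop hooks cannot enforce. A minimal sketch of that ordering (step names paraphrase the talk; the function is hypothetical, not a Kubernetes API):

```python
def shutdown_sequence():
    # Each step must finish before the next begins; independent per-container
    # preStop hooks run in parallel and cannot guarantee this ordering.
    steps = [
        "stop accepting new client connections",
        "switch the primary from active to passive",
        "remove this pod's membership from its peers",
        "drain existing client connections",
        "shut down the application",
    ]
    completed = []
    for step in steps:
        # A real defer container would call the application's admin API here.
        completed.append(step)
    return completed

print(shutdown_sequence())
```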
D
That is, wait until the existing client connections have been served, and then you should go ahead and delete or shut down the application. So these things involve cooperation of termination scripts among the containers. You couldn't possibly do this with preStop hooks, because the preStop hooks would be started in parallel for all these containers, and then it would be very hard for you to perform such cooperating tasks once they have been started. And finally, the last use case is abstraction.
D
We already have a nice layer of abstraction for initialization, and defer containers would bring the same functionality for termination too. So if you have a requirement to run any arbitrary client code, and you want your own initialization and termination sequence inserted without disturbing the customer's code, then you could use defer containers and init containers. One of the examples here: say, you know, you want to start billing and then stop billing.
D
You start billing as soon as the containers are started, and you stop billing as soon as the containers are being stopped. So that could be achieved without patching the customers' container images, etcetera. If you're writing a test framework, or a controller to run a lot of tests, or CI systems, then you could easily initialize the tests using init containers and then upload the test results somewhere using defer containers, without knowing anything about the runtime of the test containers.
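To make the shape of the test-framework use case concrete, here is a hedged sketch of what such a pod spec might look like. `initContainers` and `containers` are standard pod-spec fields; `deferContainers` is only the field proposed in the talk and is not part of any released Kubernetes API, and all names and images are invented for illustration.

```python
# Hypothetical pod: an init container prepares test fixtures, the main
# container runs the tests, and a proposed "deferContainers" entry
# (NOT a released Kubernetes field) uploads the results afterwards.
test_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-run"},
    "spec": {
        "initContainers": [
            {"name": "init-fixtures", "image": "example/fixtures:latest"}
        ],
        "containers": [
            {"name": "run-tests", "image": "example/tests:latest"}
        ],
        # Proposed in the defer-containers design; hypothetical field.
        "deferContainers": [
            {"name": "upload-results", "image": "example/uploader:latest"}
        ],
    },
}

# Lifecycle order implied by the proposal: init -> main -> defer.
order = (["init:" + c["name"] for c in test_pod["spec"]["initContainers"]]
         + ["main:" + c["name"] for c in test_pod["spec"]["containers"]]
         + ["defer:" + c["name"] for c in test_pod["spec"]["deferContainers"]])
print(order)
```

The test framework never needs to know what the test container does internally; the upload step is attached from outside.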
D
Similarly, if you are building an operator for serverless-type workloads, you might have a lot of runtimes for the serverless functions. So without writing our own termination scripts for each serverless runtime, we could have generic startup initialization and termination code using init and defer containers. So you could simply say that this particular runtime is not there anymore; you could say that it should drain and exit within, say, 5 or 10 seconds, and tell the controller, the master function that executes these runtimes.
D
Things
are
typically
make
them,
wait
for
a
particular
time
out
and
then
they
just
go
and
die
I
mean
by
the
time
data
you
can
again
inform
the
contain
I
mean
from
the
central
control
of
that.
They
know
this
and
then
the
most
event
times
normal
no
longer
exist.
So,
finally,
you
could
also
use
it
in
winter
in
federated
the
our
controllers,
where,
before
you
start
a
particular
party
using
the
separate
controller,
you
could
either
global
DMS,
which
is
just
different
from
the
local
DNS.
D
And then, when these pods are gracefully getting terminated, you could nicely remove the federated DNS entries from them as well. Even if the customer code has its own initialization and termination semantics, you could easily insert your own on top of that: you could add your initialization as the first one, add your cleanup as the last one, and then simply implement the termination grace period depending on the needs.
D
So the behavior of the defer containers would be that, by default, the trigger would be a preStop trigger, just like the preStop hook; and then the restart policy, just to avoid confusion, the behavior would be that it can be restarted if it fails. Then, if you have configured a defer container as well as a preStop hook in your pod spec, the defer containers take precedence and the preStop hooks are disabled.
D
If there are any images for the defer containers that are different from the application containers', they would be pulled while the application is running. But the thing is, you'll be able to get the logs, and the defer container status would survive kubelet restarts. And yeah, the defer containers, as we mentioned before, run before the containers are killed; before we go ahead and kill the application, everything should have finished, well before the actual termination grace period ends. And if a particular defer container step is not completed, we would try as much as possible during the termination grace period.
D
For this one alone, I think there is an open PR right now, so this might come; this might be added to preStop hooks. But everything else is still true as of today. So that's it. There is an open PR, a work in progress, where we have implemented the initial minimal viable defer containers on top of master, so people are welcome to try it and give us feedback, and we would be happy to know what the next step is to bring this into the 1.8 release.
A
Well, I'm interested in taking a look at that; it sounds interesting. Does anybody have any questions for those?
A
All right, so it's one o'clock, but I think, just hold on for another minute. We should start on the 1.8 planning, and I'll touch base with you on how we should go about doing that; I would have the PM leads and all that, and we can start that in the next week or so, since we don't have time today, and that's totally fine. Otherwise, are there any other announcements pending at the last minute?
G
One quick thing; well, actually, we probably should spend some more time talking about it, because Helm does client-side validation and I don't see that there's a way to turn it off. If I use even the newest release of Helm against Kubernetes 1.7, I can't set a non-root user if I'm going to mount volumes; basically, pod security context doesn't work, because of a wire-compatible but non-schema-compatible change.
G
It
was
made
in
1.6
to
1.7
and
for
a
lot
of
the
staple
application
charts
anything
that's
going
to
run
as
a
non
root
user,
which
probably
should
be
anything
everything
it's
not
going
to
work
against
kubernetes
1.7.
So
there
are
some
options
in
terms
of
ways
we
could
potentially
address
it
a
long
term.
It
seems
like
what
the
community
is
going
with
is
we
shouldn't
be
doing
client-side
validation?
A
Yeah, but do you have an issue open by any chance?
G
A
All right, well, thanks, we'll take a look at that. Anybody have anything else?
A
Okay, all right, we'll talk offline on the chat. Thanks everyone for coming today, and see y'all next week.