From YouTube: Kubernetes SIG Apps 20170612
Description
Minutes and agenda are at https://docs.google.com/document/d/1LZLBGW2wRDwAfdBNHJjFfk9CFoyZPcIYGWU7R1PQ3ng/edit#
A: Welcome to the June 12, 2017 Kubernetes SIG Apps meeting. We have the agenda today, and I know it's been shared once, but we've had a bunch more people join, so I'll share it again in chat so you can follow along. I'll be leading today; my name is Matt Farina. If somebody would like to be a note-taker, or help jump in and take notes on this, it would be appreciated.
A: We can have more than one person — feel free; the more notes and details we have, the better. With that, we have one announcement: over the next couple of months we're going to put a focus on charts. In the past we've had focuses on different things — CI/CD tools, things like that — and charts are now something we'd like to put a little bit more focus on: tools around charts, chart best practices, things of that nature.

A: The idea is that we would like to help get charts into a better position — more well-rounded, fill in any issues, and help them get wider adoption, anything around that. So if you have something you'd like to demo in the next couple of months that will help with charts and any of that stuff, please reach out to us. We'd love to know what those details or demos are, so we can get them on the schedule.
C: Yeah, this is Dan. I guess to echo that comment: please update those. There are also a few features — [inaudible] expansions last week — but as far as I know, I don't think there are any more that had the extensions branding. So if it isn't in now, I don't know if it'll be in 1.7. Okay, thanks.
D: The update feature, which is primarily the one for 1.7 that we were interested in, is merged. Controller history is merged. The DaemonSet update types leverage controller history, so DaemonSet history is also merged. We have some additional PRs for e2e and a little bit of API cleanup, but everything is looking on track for the 1.7 release.
D: The last I saw on cron jobs, updating didn't get the love that we wanted to give it this release cycle, and I don't think it was going to make 1.7 for the beta, yeah.
I: Okay, can you guys see my screen? I can, okay. So, I gave this a somewhat cheeky title — it's mostly tongue-in-cheek, I guess. What I want to talk about today is just the Kubernetes story of how you write down an application — you know, what container is running.

I: Everyone doing a deployment on Kubernetes ends up writing their own bespoke toolchain for managing and authoring JSON files, and for trying to reuse parts of them. I think there are projects that have actually made quite a bit of sense out of this, given the tools that they had — obviously Helm is a great example, where they use Go templates to get some degree of sharing between Helm charts — but I think it's worth stepping back and just asking questions like: what is this?
I: So our first goal for the project — for ksonnet — was to build better primitives for expressing the actual Kubernetes API itself. Contrast this approach with something like compose2kube, which is really taking a different API and mapping it onto the Kubernetes API. When we talked to a lot of people who have scaled up their applications, we found that an important goal for people is just having good primitives for dealing with the Kubernetes API. So we're not writing our own API.
I: The second goal that's really important to us is that it has to be compatible with existing code — in particular, it has to be compatible with the JSON files that people have written. So we think that users should be able to template what they want and not what they don't, and in particular we think they shouldn't have to port to Haskell or Python or something, right?
I: You should have a choice — a gradual path for templating existing code — and it should be amenable to the scaling demands that people have, empirically. Another goal that is important to us is composability: the different parts of each application should be able to be written independently, by independent teams, or at least as independently as possible.
I: So, for example, the logging infrastructure team should be able to write their part of a deployment object, and the app team should be able to write their part of the deployment object, and it shouldn't have to be a single person's job to figure out everything that those apps do and then write it all down into a single deployment. Both teams should be able to manage the configurations for their different components independently, and you should have good primitives for that.
I: So this is an opinionated workflow. In the Platonic ideal of configuration systems, we might end up with a different thing — if it were mathematically proven that Haskell is the way to configure Kubernetes clusters, say — but that doesn't account for the fact that the barrier to entry for ground-up rewrites of people's configuration is really a big barrier to adoption.
I: This doesn't take away that option. Okay, so what is it? ksonnet is a Jsonnet library — I didn't come up with the name; that was Sam Ghods at Box. Jsonnet, if you don't know, is a JSON templating language: a strict superset of JSON that includes things like variables, functions, and some nice object-oriented features. Jsonnet was an important choice for us because it helps us accomplish goal two, which is to be entirely compatible with existing JSON configurations.
I: We could have gone with TypeScript or something like that, but the feedback we got is that people were less comfortable with a full-blown language, and so we picked a language that strikes a balance between being powerful and expressive while still being minimal enough that, you know, a single engineer could basically write the whole runtime themselves.
I: There's also tooling — I don't have time to show you it, but we're investing heavily in that as well. Before we go on to the demo, I think it's important to talk a little bit about the progress we've made in the community and who is involved. In some way or another, I think Box, Bitnami, Deis, and CoreOS have made significant contributions to the project. For the ksonnet library, Box was the pilot customer.
I: They have about eighteen thousand lines of configuration code, and a lot of the design decisions that you will see in the demo are based on their use case. It's a little bit slanted towards the maintainability problem rather than the starting-new problem; that will change eventually.
I: Okay, so before I show you this code, I do have to say this is super early. Technically we call it beta; I would say it's really alpha, but I think there's enough here that the concept is, I hope, at least clear. So let's look at some code. I have three demos here; they're not complicated demos. There are some imports up here that I'm skipping, since they're noise and I want you to concentrate on the actual code.
I: I'm going to scroll down past them. So this is basically a mirror of the hello-world example that's in the Kubernetes documentation. Here we're creating a container: there's a container namespace, we call .new on it, and we create a container called nginx with this image tag — this Docker image. The interesting part about this first example is that we use this plus operator to add container ports: the container port has a name — it's called http — and it exposes port 80.
I: If I were to comment that out, you would see on the right-hand side here that the ports section actually goes away. So there is a really strong correspondence with the Kubernetes API: container.ports is an actual field on the v1 Container API object, so there's a really strong correspondence between the API and the ksonnet library. We do take some liberties with things like new to make things more convenient, but the idea is that it should concisely express the API, with modern language tools like variables and such.
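The first example, sketched in plain Jsonnet rather than the on-screen ksonnet-lib helpers (the image tag and field values here are illustrative, not taken from the demo):

```jsonnet
// Base container object; fields mirror the v1.Container API.
local container = {
  name: "nginx",
  image: "nginx",
  ports: [],
};

// Jsonnet's object `+` merges fields, and `+:` appends to the
// inherited field instead of replacing it. Commenting this mixin
// out makes the ports section disappear from the output, as the
// speaker demonstrates.
container + {
  ports+: [{ name: "http", containerPort: 80 }],
}
```

Running a file like this through the `jsonnet` CLI emits the merged JSON object.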
I: The next example builds on this a little bit — there are imports that I'm skipping again — to create a service that points at that deployment. This is still basically the same thing: we're creating the nginx container, there are some common labels that we'll use in the service selector, and we make the port name a variable so that the container and the service port can both point at it.
I: The deployment object is still basically the same thing. The interesting new part is that we have a service.new — it's called hello, it selects on these labels (hello and back-end), it exposes service port 80, and it points at the container port's name, which is http — and then we wrap this all in a v1 List, which means you can just kubectl a list of stuff to the server and it will deploy all of it at the same time, for some definition of "at the same time".
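The shape he describes, again as a plain-Jsonnet sketch (the labels and names follow the demo; the deployment body is elided):

```jsonnet
// A shared port-name variable keeps the container port and the
// service's targetPort in sync.
local portName = "http";
local labels = { app: "hello", tier: "back-end" };

local service = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "hello" },
  spec: {
    selector: labels,
    // Kubernetes resolves a string targetPort against the
    // container's named ports.
    ports: [{ port: 80, targetPort: portName }],
  },
};

// Wrapping everything in a v1 List lets kubectl create all the
// objects in one shot.
{
  apiVersion: "v1",
  kind: "List",
  items: [service],  // the deployment would be listed here too
}
```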
I: So I think what's important to take away from these examples is not that you understand everything, but that you sort of understand the flavor of the language and how this could make things more expressive and maintainable. The last example I want to show you goes back a little bit to the first example. Imagine a situation where one team is writing the deployment and another team wants to embed some volume mounts onto every container in the deployment. So this part is still the nginx container.
I: This is still the deployment, but the interesting part is this plusAppendVolumes here. If you go to this function, it takes a path that we want — /usr/share/nginx/html — and it will append a volume mount and a volume to every container inside the deployment.
I: So — you can basically ignore this part — here we're creating a volume mount pointed at this path, and then here we create a volume from that, which points at some persistent volume claim, my-claim-1. The interesting part is where we're mapping over the containers. The logic here is: for every container c, I want to add a volume mount for the nginx volume — this nginx mount that we created — so it will return that and append to the volume mounts.
I: Let me put it this way: it'll append volumes to every container. So if you look at this — when I comment this out and save it, you'll notice that this volume mount here goes away (and it does, and all we're left with is the volume), but when I uncomment it, it obviously comes back. So what you can do is create arbitrarily complex deployments, and you can create arbitrarily complex predicates and append volume mounts to them.
I: And then the app writer has, you know, Turing-complete flexibility to alter the existing deployment, which means that one team — the logging team or whatever — can write a volume-append mixin, and the apps team can write their deployment or whatever, and so on.
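The mixin pattern he describes can be sketched in plain Jsonnet as a function that maps over every container (field names mirror the Deployment API; the mount path and claim name are the ones from the demo):

```jsonnet
// Append a volume mount to every container in a deployment, plus
// the backing volume -- the "logging team" mixin.
local appendVolume(mount, volume) = {
  spec+: { template+: { spec+: {
    // `super.containers` is the container list being extended;
    // each container gets the extra volume mount appended.
    containers: [
      c { volumeMounts+: [mount] }
      for c in super.containers
    ],
    volumes+: [volume],
  } } },
};

// Usage, applied to some `deployment` object:
//   deployment + appendVolume(
//     { name: "nginx-html", mountPath: "/usr/share/nginx/html" },
//     { name: "nginx-html",
//       persistentVolumeClaim: { claimName: "my-claim-1" } })
```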
I: So this is just a simple example. This all works; it's in a branch in our repository, ksonnet-lib — the GitHub address is just github.com/ksonnet/ksonnet-lib. All of these examples you can kubectl directly into the cluster. They're in a branch called sig-apps — there are two branches, master and sig-apps — and it's the last commit, so it's pretty easy to find. I don't want to take up too much time; I want to leave time for questions, because this always spawns lots of questions.
I: I know people have been typing in stuff that I can't see yet. Yeah, okay — so how do we start with questions? I've actually never done this before. Okay.
J: [Question, off-mic: is this Turing-complete?]
I: Okay, so that's a good question. It is Turing-complete; the simple way to see it is that you can do recursion and conditionals inside these functions. So I could do something like "if blah blah blah, then blah blah blah, else blah" — so it's definitely Turing-complete. I think if you ask Dave Cunningham, who's the author of this, he will say that he has intentionally limited the number of crazy things you can do with it. That's what I have found, empirically, writing real applications on top of Jsonnet.
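A quick illustration of the conditionals and recursion he mentions, in plain Jsonnet (the function names are made up for the example):

```jsonnet
// `if` is an expression, and local functions can recurse --
// together that is what makes the language Turing-complete.
local replicasFor(env) = if env == "prod" then 5 else 1;
local factorial(n) = if n <= 1 then 1 else n * factorial(n - 1);

{
  prodReplicas: replicasFor("prod"),  // 5
  fact5: factorial(5),                // 120
}
```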
J: [Question, off-mic, about the YAML shown on the right-hand side.]
I: Yeah — so what's going on here is that my Visual Studio Code extension is taking the JSON output of the jsonnet command-line utility and turning it into YAML; natively, Jsonnet does not emit YAML. I think at some point we will want to do that, and I think at some point we will want to be able to import YAML files. So you could imagine — when I say this is fully compatible with existing JSON code —
I: — I mean that we could take a deployment JSON file and just start adding stuff onto it, and then compile that out with the jsonnet command-line utility. So, for example, you can imagine that instead of having this here, I just had an actual deployment JSON object; I could then take this plusAppendVolumes, add it to the end of that, and that would work. So, to be clear:
I: What doesn't work: since this is a superset of JSON, not a superset of YAML, in order to consume a deployment object that's written in YAML — like on the right-hand side — you would have to convert it to JSON. The conversion is mostly painless, save a couple of corner cases. It hasn't been implemented in the jsonnet command-line utility yet, but it will be eventually, because it's important to us for a number of reasons.
I: So, to answer the question I heard directly: the right-hand side is the product of piping this file into the jsonnet command line, getting JSON, and then converting that to YAML. I hope that answers it — I just have sort of smattered knowledge about this general area — but if it doesn't, maybe ask another one.
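The pipeline he describes — Jsonnet in, YAML out — looks roughly like this (a sketch; it assumes the `jsonnet` CLI and some JSON-to-YAML converter, here Python with the non-stdlib PyYAML package, are installed):

```shell
# Evaluate the Jsonnet file to JSON, then convert the JSON to YAML.
jsonnet deployment.jsonnet \
  | python -c 'import sys, json, yaml; print(yaml.safe_dump(json.load(sys.stdin)))'
```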
A: Right, I have one more question — a clarification. This is a new language to introduce here; it looks similar to something like JavaScript or some of the others, but it's a new language, and ksonnet is kind of layered on top of that, if I understand it right. Do you have a tutorial on getting started, or anything for somebody who wants to learn this? Because if I'm just going to approach this — like with Go — there are all these language constructs and things I may not know.
I: So the Jsonnet language has a website, jsonnet.org. We are planning to write tutorials that explain the core constructs, but the project just hasn't — we just haven't had time; the project has not matured to the point where we have real documentation yet. So if you're looking for language-level constructs — like how do I declare a function, how do I use these plus and mixin operators, how do I do an if — I would say you can go to jsonnet.org, and there's a tutorial there. If you're looking for more of the ksonnet side — an introduction to the core abstractions of ksonnet — we will be filling out ksonnet.io more over the next couple of weeks. Right now it's kind of not in super great shape; in particular, it's behind this specific beta. The next version of the beta will come out soon, so I would say wait for that.
A: That's good. One of the things that I'll say is just: traditionally, giving simple documentation that somebody who doesn't know can jump into and just get started with, without having to think — so they can start to learn those constructs — really helps uptake, even with something novel. But I understand this is early alpha.
F: This is Alex. Watching some of the commits for the ksonnet project, I've gleaned — I haven't seen it actually stated somewhere — that the project looks like it has a lot of code that generates these libraries. So I get the impression that you're kind of taking the Kubernetes source declarations as input and trying to produce these libraries in an automated fashion. Is that correct?
I: That is correct, mostly. We're not doing anything as complicated as crawling over the Go abstract syntax tree or anything; we're basically taking the OpenAPI spec for Kubernetes and generating the core of the libraries. The architecture of this is basically: we have one file called k8s.libsonnet. It is entirely generated by code, and there are some hooks that let us customize things like the constructors — like new in certain places — alias different functions, and collapse things like spec on top. So this is like the innermost of a set of concentric circles of abstraction.
I: On top of that, we have k.libsonnet, which contains a bunch of modifications that we're currently making by hand that make the whole API more friendly — the mapContainers function, for example, is here. Eventually, our goal is to have almost all of this auto-generated, and the reason is that we want it to be a build failure —
I: — if the API changes in a way that's not compatible, rather than leaving something for people to find out when they're trying to deploy something.

F: In that vein, how do you plan to deal with API extensions and third-party resources?

I: I would say that we have not yet really thought through the third-party extensions story.
I: I know that Joe Beda's opinion seems to be that this will be a good basis for a toolkit for creating Jsonnet libraries for third-party resources.
I: What that workflow actually looks like is not yet clear. One thing we might do is: if you deploy a third-party resource in your cluster, maybe you could point the tool at your kubeconfig and get a Jsonnet library for all the third-party resources in that cluster. But I don't want to over-promise or anything and say that we've really figured this out, because we have not gotten that far.
A: This is a great conversation, and what I would ask is: if folks have more questions or conversation they want to have, can you take it over to the SIG Apps mailing list? There seems to be some interest here, so let's continue that in another capacity. Thank you very much for the demo and for being able to answer so many questions.
K: Helm, for one thing, has been great for developing an application, putting it up onto Kubernetes, and that kind of stuff. One of the troubles that we had at Deis — and kind of worked around with Draft, or with Helm — was the issue of developing a chart with Helm, or building an application, pushing it up to source, then actually deploying a Helm chart, and iterating on that development and figuring out how to get it onto the cloud.
K: So that's kind of prefacing, giving you the surface area of where Draft came from. Draft, essentially, is a developer tool used to help develop an application before you've actually committed to source control — before your CI or CD system kicks in to build your application and push it up to the cloud. This is kind of for the person who's building on a fork of their project, trying to build it up and just test things out.
K: I will take that as a yes, then. So Draft, essentially, is a developer tool that runs on the cluster, and there's also a local client available. So, `draft version` — I've got a version over here; 0.4.0-rc1 is the one that I have installed. I also have a server version. A lot of people are familiar with how Tiller and Helm interact; Draft works the exact same way. So in my Kubernetes cluster, in kube-system —
K: — if I get the pods in there, you can see that there are a couple of draftd pods running — there are three of them — and then there's also the Tiller deployment pod. This is running off Helm version 2.4, so we're up to date with the latest version. I know 2.5 is coming out sometime this week, so we're going to bump Draft up to 2.5 compatibility as well.
K: Well, so we've got all that running. How Draft works is that you don't have to have any previous knowledge of how Docker works, how Kubernetes works, or anything of that sort. So we have a basic application — a Python app. If you look at app.py, all it is is just a little simple Flask app.
K
Then
it's
just
a
hello
world
web
application
running
on
port
8080,
and
it's
got
a
requirements
file
so
typical
stuff
that
you
would
see
for
a
basic
Python,
app
server,
flash
so
to
interface
on
how
to
start
with
this
project.
You
would
go
a
draft
create
and
what
that
does
is
that
it
scaffolds
and
it
text
it
goes
through.
K
But
this
the
main
point
takeaway
here
is
that
draft
create
all
it
is,
is
just
the
scaffolding
in
front
and
it
just
creates
some
stuff
for
you
to
get
started
with
your
app
onto
the
cloud
and
then
after
that,
there's
also
a
draft
off
tamil.
It's
not
really
that
important,
but
it
just
gives
you
like
a
randomly
generated
name.
They
tells
you
what
namespace
your
applications
we
are
deployed
to
so
like
configurable
things
you
would
do
with
home.
So
then
what
you
do
after
it's
been
scaffold
is
done.
K: — is you do a `draft up`. Essentially, it archives your current directory and ships it up to the draft server, and the draft server pipes back a bunch of build information. So it's piping back a docker build of your local application and, once it's built, it pushes it to a private registry. Here I have my draftd daemon set up to push to my registry.
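The loop described so far comes down to two commands (a session sketch; the comments summarize the behavior described in the talk, and output is omitted since it depends on the cluster):

```shell
# Scaffold a Dockerfile, a Helm chart, and draft.toml, based on the
# language pack that draft detects (Python here).
draft create

# Archive the current directory, ship it to draftd in the cluster,
# docker build and push the image there, then have Tiller install
# (or upgrade) the generated chart.
draft up
```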
K
Something
Moss
is
the
application
name,
and
so
right
now
we're
just
waiting
for
that
image
to
be
pushed
up,
and
it
might
take
a
little
bit
of
time
here,
but
once
that's
finished
essentially,
what
happens
is
that
it
will
now
tell
once
it's
pushed
up.
It
will
tell
tiller
to
install
the
chart
that
was
confirm
done
locally
and
it'll
finish
that
so,
let's
see
if
we
can
go
into
another
terminal
here
and
see
if
we'll
find
it
in
the
clock
we're
in
a
Grande's
cluster
pod.
So
this
was
an
old
application.
K
No,
that's
yeah,
it's
slack
and
zu
what
languages
are
supported
right
now.
There
are
about
six
different
languages
and
if
you
go
into
Azure
draft
I
think
it's
Python,
Java
no
Jas,
but
essentially
yeah
go
Java,
node,
PHP,
Python
and
Ruby,
and
anyone
can
create
and
create
these
packs.
I
know
someone
tried
to
create
a
typescript
pack
and
then
there's
another
one
that
someone
else's
wanted
to
build,
which
was
a
it
would
be.
K
Right
now,
honestly,
it's
not
intelligent
enough
to
figure
out
what
parts
or
ports
are
needed
to
be
exposed
right
now
it
assumes
in
the
chart
itself.
It
assumes
that
it's
going
to
be
exposed
on
port
8080,
but
the
important
takeaway
here
is
that
it's
only
being
written
out
to
the
local
file
system.
So
after
it's
actually
been
done
by
drop,
create
the
important
takeaway
is
that
if
your
app
does
not
or
if
the
in
the
future,
we
want
to
do
a
smarter
port
detection.
K
So
then
we
can
figure
out
if
the
Python
app
actually
listens
on
port
8080
or
if
it
listens
on
port,
8000
or
whatever.
If
that
doesn't
happen
to
be
true,
then
you
can
just
quickly
go
into
the
chart.
You
can
modify
it
to
whatever
your
needs
are.
You
can
add
extra
ingress
resources.
You
can
change
your
chart.
It's
just
basically
there
for
a
basic
scaffolding
to
get
you
up
and
running
and
hobbling
a
little
bit,
and
then
he
can
go
and
run
with
that
offensively.
So
all.
L: [Question, off-mic, about where the web server in the PHP pack comes from.]
K: I think that is determined by the PHP Docker image itself — I'm pretty sure the Docker image bundles Apache. I'm not a hundred percent sure on that, but that is done within the Docker image itself. So again, this is to get you scaffolded and bundled up and ready to go. If you don't like Apache, if you don't like nginx, or whatever it is, you can change the Dockerfile or you can change the chart to whatever you need.
K
So
if
you
want
to
do
a
separate
chart,
starter
image
or
you
want
to
run
a
separate
docker
container
inside
that
pod,
that
has
Apache
and
it's
just
a
sidecar
container
and
you're
using
about
two
servers-
web
traffic-
that's
totally
fine!
So
now
just
consuming
a
little
bit
with
the
demo.
Since
it's
now
pushed
off
the
docker
image
was
pushed
to
the
registry,
and
then
it's
just
saying
this
is
deploying
to
kubernetes.
This
is
essentially
helm
taking
over.
K
K: So — I don't know if I have my ingress set up correctly right now for this — but if I wanted to change app.py to say "hello draft" instead of "hello world", it will write that, and then it'll automatically notice that we've changed files in the local file system, and it will rebuild and push it again. It'll do that, and as for what will happen on the server: right now we have a helm list showing our release at revision one.
K: What will happen is that it will push that new Docker image out, and then it will bump that release up to revision two. So if you make changes to your Helm chart, it would do a helm upgrade and change that to revision two — it'd still be the same release name and hostname, but your release would now be at revision two. So that's the idea there. I think that's all I can do for sharing, so I can just answer questions from here.
K: [Reading from chat] "Let me know — I'm Scott Rigby — must we have a Docker image locally?" No. What happens here is that it archives the current directory. Essentially, the way docker build works is that it archives the current directory, skipping any files that are in .dockerignore, and sends that to the Docker daemon; we're doing the exact same thing, except we're sending it to draftd. So no, a Docker image is not required locally.
K: To do this, it just works on application source code. And this is not meant as a deployment tool for production; this is just for code-to-commit. Afterwards, what you should have in place is: you would take this, package it using something like helm package, and actually release a fully CI/CD'd chart that's been vetted by your CI and your QA team and all that kind of stuff.
A: So, if I understand it right: there's draft, the client that runs on your local system, and draftd, which runs in your Kubernetes cluster alongside Tiller. In this case it tars up the local directory and sends it to draftd running in your Kubernetes cluster, and that's where it creates the container image, pushes it to the repo, and does those things. — That's correct.
K: To do the docker build — yes, it currently does [mount the host's Docker socket]. However, I've been thinking about this, and I want to package it with a Docker-in-Docker container, so that we can ship a different Docker version than what's available on Kubernetes. This is one of the things that's blocking a couple of people: they want to take advantage of Docker 17.05 capabilities — the multi-stage (image-within-image) builds — but because of the version Kubernetes currently supports —
K
They
would
like
to
take
more
advantage
of
the
later
versions,
so
I'm,
thinking
of
eventually
switching
it
off
from
the
host
mounted
docker
sockets
to
a
doctor
and
docker
running
inside
the
pod,
and
then
just
communicating
between
the
two
like
a
sidecar
and
if
anyone's
interested
in
the
project
and
wants
to
talk
about
the
development
or
want
to
talk
about
issues
or
whatever
else
that
they're
having
I'll
link
to
it
in
a
second.
But
it's
github.com
as
your
draft.
So
alright.
A: Thank you. Well, we've got probably just over nine minutes left, and we've got a number of projects to run through for stand-up. So thank you very much for the demo; this was useful, and it should be fun to see how this grows and gets used. Thank you. So, to switch back to stand-ups here: I just want to kind of rattle through some of these so we can get status and know where they're at — there are like five of them — so let's make it quick, a minute or less each if we can.
J: An nginx container so that you can map the UI and the API on the same domain is now no longer a requirement; you can configure the API hostname and the UI hostname independently if you want to have them on two different hostnames. That fix is in, and hopefully a couple more fixes next week.
M: Sure. I know we did a check-in at the last SIG Apps meeting — let me quickly recap where we got to. I have everyone's names — well, those were their Slack usernames, but I'd need to actually click on them to know what their real names are.
M: Sorry about that. So: Michelle, I remember; Ryan J.; Tom Davidson; Tony Bellamy; and myself were in that group. Mainly we — oh, actually, I'm sorry, I didn't have my video going, oops, okay, anyway, hello everyone — mainly we recapped the last meeting to start. Ryan —
M
Also
gave
gave
a
point
of
view
of
what
he
was
working
on,
which
was
pretty
similar
to
the
first,
the
first
draft
of
the
PR
in
the
charts
repo
that
I
put
forward
that
put
a
bad
that
did
a
host
amount
directly,
if
I'm,
remembering
correctly
I,
don't
know
if
Rams
on
the
call
but
correct
me
if
I'm
wrong,
but
I
believe
it
was.
It
was
mounting
directly
to
to
the
actual
deployment
object.
I
think
I
think
that's
how
he
was
doing
it.
M
What
we're
doing
is
mounting
the
kubernetes
post
pass
directly
to
the
persistent
volume
and
then
we
we
can
already
do
this
without
that
PR.
But
it's
a
little
bit
tedious,
because
if
you're
doing
this,
the
presumption
is
that
you're
doing
it
locally
and
mini
cube,
has
some
interesting,
very
specific
recipe
to
be
able
to
even
find
a
persistent
volume
to
a
claim.
But
so
I
charted
that
in
if
I
put
that
in
a
chart
template.
M: I think, if you remember from last week — or anyone who was on the call last time — there were some issues where you had to change the permissions on your host machine for the CSS and JS files, or the local Drupal files, and then clear the registry. But that's very application-specific, and you have to do that whether or not you're charting it; it just means that automating —