From YouTube: Kubernetes SIG Apps 20180709
A: The other announcement is: if you do have a demo, please let us know and we'll get you on the schedule. I've talked to a couple of folks already, but if you've got something that you want demoed, let us know and we'll try to find a good week to get it going. Those are the only announcements. Does anybody have any other announcements, anything they'd like to share?
A: We might be missing our demo person. We'll give it some time and jump ahead; if they show up, they can go ahead and do the demo afterwards, and if not, we'll go ahead and reschedule it.
A: So we have a few things up on the discussion topics. The first two of them are actually related to the Application CRD, and the first one is the use of Markdown, or CommonMark, in the notes field. If you open it up, there is a suggestion on the Application CRD about the notes field.
A: The description for the notes field says that notes contain human-readable snippets intended as a quick start for users of an application, and the question becomes how. It was just plain text, but then there was a request to allow Markdown as well, and we asked that, if it's going to be Markdown, it be CommonMark. CommonMark is a spec for Markdown. Markdown was originally written in Perl, and people took different derivations and wrote their own parsers, splintering and going in different directions, so there was GitHub-flavored Markdown and all these different styles. CommonMark is a common spec for Markdown, so parsers can be written the same way, and we could specify that. But this does put a dependency on any tool that's going to interact with these notes: it would have to include a CommonMark parser in order to display whatever the Markdown is. So what do folks think about that?
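The tool-author tradeoff being discussed can be sketched roughly as follows. This is not from the meeting: `render_notes` is a hypothetical helper, and the `commonmark` Python package (one CommonMark reference implementation) is assumed as an optional dependency.

```python
def render_notes(notes: str) -> str:
    """Render an Application CRD notes field for display.

    A tool that cannot take on a CommonMark parser dependency can
    fall back to showing the raw text, at the cost of users seeing
    literal Markdown syntax in the output.
    """
    try:
        # `commonmark` is an optional dependency here; if it is not
        # installed, we degrade gracefully instead of failing.
        import commonmark
        return commonmark.commonmark(notes)
    except ImportError:
        # Fallback: plain text, with any Markdown markup shown verbatim.
        return notes

print(render_notes("Run `kubectl get pods` to *verify* the install."))
```

A tool taking the fallback path still shows something usable, but users see raw backticks and asterisks, which is exactly the fragmented experience the discussion is worried about.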
A: No, there has not been any discussion on that part of putting CommonMark in there. I'm the one who asked for CommonMark, and the reason I did is because other specs were doing it: things like OpenAPI use CommonMark, and Kubernetes already uses OpenAPI. But it was also in order to spark conversation, because Markdown is kind of a nebulous term (GitHub-flavored, CommonMark, whatever), and I wanted to get the conversation going around some of the specifics. So that's where that came from.
A: For the most part, yes, but that's talking about the people who develop these applications. The real impact is on the tool authors, whoever is authoring the tools that have to interact with these, because if a tool author says, well, I'm just going to ignore it, then they'll have a bunch of people using Markdown, and that's just going to show up raw in fields all over the place, and that doesn't give the greatest experience.
E: ...the operations that they want to implement, right. So, for a gallery, my expectation would be this: if you did a `kubectl get application`, and you're doing it from the CLI, you'd probably render the entire thing just the same way you pulled it down. If you did a describe, that might get someone interested, and then the choice becomes: what do you display in the describe output? Do you display the notes? Do you not display the notes? Do you choose to render the Markdown as text? I think that is kind of up to the underlying implementer in terms of how they want to do it. I'm not sure that, for this particular field, if different implementations choose to do different things, it would be a general fragmentation of the user experience, or that it would break the UX across tools in a very meaningful way. But that was my opinion.
A: All right, well, since we don't have too many opinions on this, the issue is up here: github.com/kubernetes-sigs/application, pull request number 53. If folks want to come chime in, please do so and let us know what you think. We'll probably go off and try to ping some of the UI authors out there to get more of their impressions, as with some of the other pull requests.
A: We know who some of those folks are, and we might want to get some feedback from them as well, so I'll try to do that, because this is very UI-heavy, and get some feedback. So, thank you. The next issue we had coming up here was more of a release-schedule bit: with the Application CRD, we're discussing actually cutting a release, and there's been some discussion.
A: Should it be a v1alpha1 or a v1beta1? v1alpha1 is kind of the working version right now, but a v1beta1 would be a little bit different, because that would come with some support, and that tends to be when people start picking things up, when things get into beta. So what do you all think? Do you have any opinions on it? We're leaning toward v1beta1 for the upcoming release, but we want to hear if anybody sees it differently.
E: It means we're going to support it for some length of time, and we're not going to get rid of it unless we release a v1beta2 or so. The implication is that if you do v1alpha1, that can pretty much be deprecated at any time; effectively, we make no promise not to break it. v1beta1 means we will support it until v1beta2 comes out, and then for at least, you know, six months or so after. The question still remains.
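The stability ordering being described (alpha can break at any time, beta carries support until the next beta, GA is stable) follows Kubernetes' usual API version naming. As a rough illustration, not anything from the meeting, here is a sketch of how such version strings order by stability; `version_key` is a made-up helper:

```python
import re

# Stability ranks mirror the Kubernetes convention: alpha < beta < GA.
_STABILITY = {"alpha": 0, "beta": 1, "": 2}
_VERSION_RE = re.compile(r"^v(\d+)(?:(alpha|beta)(\d+))?$")

def version_key(version: str):
    """Sort key for Kubernetes-style API versions.

    Orders by stability first (alpha < beta < GA), then by the
    major and pre-release numbers, so that
    v1alpha1 < v1beta1 < v1beta2 < v1.
    """
    m = _VERSION_RE.match(version)
    if not m:
        raise ValueError(f"not a Kubernetes API version: {version!r}")
    major, stage, minor = m.groups()
    return (_STABILITY[stage or ""], int(major), int(minor or 0))

versions = ["v1", "v1alpha1", "v1beta2", "v1beta1"]
print(sorted(versions, key=version_key))
# → ['v1alpha1', 'v1beta1', 'v1beta2', 'v1']
```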
E: I mean, that's what we did for the workloads API in general, right? If you do need to make some backward-incompatible changes, you can do that: you release a v1beta2, deprecate the v1beta1, and then promote the v1beta2. The other side of that is that there are more things you have to support, but there's kind of a mitigation there: in 1.11 we're going to release versioned CRDs anyway, so by 1.12 or 1.13, by the time we're looking at going GA, the support for having multiple versions should allow for normalization of the CRD into a single form internally for the controller, and at that point it's no different from any other Kubernetes resource. So I'm really not as concerned about having to support multiple versions over the next few releases. The reason to do a v1alpha1 would be if we really feel like we have no confidence right now.
E: We want to put something out for people to test with a bit more. The reason I'm getting feedback to go with v1beta1 is that the open-source UI is picking it up, and there are a couple of other people who want to pick it up as well. Actually, you guys know it would be a lot easier for me if it was v1beta1, so I really have no objection to trying to go with the v1beta1, and I'm not strongly opposed to doing alpha either.
A: I think the bigger thing that may come, rather than field changes to the spec between beta versions here, might be what we do with the controller, which is something we're still trying to work out. I think once we've mostly settled on fields and finished that off, then the controller might be where the bigger changes come as we iterate beta versions.
E: The interesting thing about controllers is that to make a breaking change, you have to have a published behavior that you modify in a backward-incompatible way, and, generally speaking, it's a lot easier to maintain backward compatibility if the API does not change. At least in my experience working with the workloads APIs, it's much easier to break people by doing things like removing fields they depend on, or changing the types of fields they depend on.
A: Alright, then I think the next one here is, and we'll see whether folks have input on this: there was a request that came in, in an issue linked here, to add minReadySeconds to StatefulSets. It's on the kubernetes repo, issue number 65098, and it's all about minReadySeconds. Does anybody know the context of this request? I know Deployments and DaemonSets, and I think ReplicaSets, already have the option; it's StatefulSets that don't. But I haven't actually used this option on any of them.
A: Okay, so there's a couple of ways to approach this, and Ken, you can probably fill in more details. There's the readiness probe, right, and the way minReadySeconds works is that after the readiness probe comes back saying everything's good, it waits that many seconds before it does things like spinning up another one, if you're dealing with Deployments and things like that. So the question is: is this something where, instead of having Kubernetes do this, it's more appropriate to have this logic in the readiness probe?
E: ...for some strange reason, after the fact. If you're deploying natively, traditionally your health check might be a little bit more in-depth, in terms of: is the application actually ready to receive traffic? There are some other implications around adding minReadySeconds, but personally I'm not opposed to doing it if people think it's useful. I'm just not sure how useful it actually is. I would be interested in having the other people who want this feature chime in on the subject and call out and say, yeah, this looks great.
E: As long as the default is consistent with the existing API, I don't see how that would be a huge problem; it's basically another tool that you could potentially use, so I don't see a huge amount of issue with it. The only other thing about it is I'm not sure how precise minReadySeconds actually is, based on things like clocks; at best it's probably an approximation, as opposed to a hard real-time timer.
A: Yeah, and some of this, like what it's trying to solve for in the example, right, is: if you wanted to have memcache servers replicating, to give things enough time to do that. It might be better to have something that doesn't count on time, because timing could change depending on where it's deployed and how it's deployed. There might be better ways to actually check for these things than to just throw a time on it and hope.
A: There's an issue open on it; I just dropped the issue into chat. So if you build a tool and you pull in kubernetes as a vendored repository to grab some of its internals, because there are some things that aren't in client-go or apimachinery or something like that, you may now start running into problems. This is new; they just started doing this in 1.11, and so a couple of projects that are trying to upgrade for 1.11 support are just now starting to experience it.
A: But I think the make scripts, maybe Bazel, actually require kubernetes' vendor directory to exist. So if you're using a tool like dep that flattens your dependencies to the top, it causes things like the make scripts to start failing. So it becomes a little bit complicated to use make and to pull these things in with the existing tools that are out there. This would not work at all under vgo, but kubernetes already didn't work there.
A: All right then, I guess before we open it up for general discussion: on that last issue we have there, if anything comes out of SIG Architecture, I'll make sure we update that issue, so if you want to follow along with this topic, you can follow that issue. Before we do that, did we have anybody come on for the Jib demo?
A: It may just have to do with our sample size of folks who are engaged here; many of them work at companies that have now created some of their own tools. A lot of the tools that I'm familiar with, where we have gaps, are at the lower levels of the stack; the higher-level ones are in application management. I mean, there are holes we haven't talked about, such as visualization of applications, and hopefully things like the Application CRD will make that easier.
A: I'm drawing a blank right now; I knew of some others, but I'm failing to remember them at the moment. We didn't talk much about it here, and we probably need to go out and ask some of these questions in survey form, and maybe even go out to some of the other big cloud-native lists to see if we can get feedback from there as well.
A: You know, there is one area folks have brought up: the ease of onboarding to Kubernetes. Here's one that's kind of tangentially come up. Okay, so if you're going to start working with Kubernetes, it's hard, because you've got to go learn all this configuration. If I want to go figure out how to deploy something on Cloud Foundry or Heroku, I can do it in a few minutes and write a few lines of YAML. You know, you're getting toward the edge of our scope here, but Kubernetes has a whole lot of stuff.
A: You need to know the context and details on all of this stuff, and if you're starting to target app developers, which is where the conversation was going, we just have a whole lot of stuff you've got to learn, and our documentation around it isn't all that great. I know somebody whose contributions have mostly been to docs, and they've come from her working through these things, going: how do I do this? How do I do this?
A: It's not documented anywhere. Then, after digging through code and specs and all kinds of stuff, she'd say: aha, I finally figured it out, I'm going to go document this somewhere. We've just got so many things like that. We expect our API documentation to be the kind of thing that's easy to get at, that people can go through and understand, but we still have massive gaps there. So that onboarding experience, especially for app developers, is complicated, and we need to find ways to make it simpler.
A: Was that Brian?