From YouTube: Kubernetes SIG Apps 20181029
A: My name is Matt Farina and I'll be chairing today. I think Ken is also on; he'll be co-chairing, and Adnan might join us as well. Today we didn't have any announcements or any demos. Did anybody have any announcements that they wanted to share? I guess the one announcement I'll share, which isn't exactly related but is big news, is that IBM is in the process of purchasing Red Hat, and so some of our Red Hatters may not be here today. I'm sure they've got lots of internal things going on.
B: I have one. This is Joe Thompson from Mesosphere. Some of you may have seen that last week we released version 1.12 of DC/OS, which includes high-density multi-Kubernetes support, so that's definitely worth your time to check out. I don't want to go on and on, but as far as we know, it is the only system that allows you to run multiple Kubernetes clusters sharing bare-metal nodes.
A: We'll take silence as not having anything. If somebody does have something they want to talk about, specifically developer tools, that's something we probably need to get more into in the near future, because the "Kubernetes is hard" thing has been rearing its ugly head again in some of the circles I've been in. People are getting a little bit... it's a bit of work, and there are a lot of platforms where you can package something up, know what you're doing, and just get it deployed and working in minutes — and then there's Kubernetes.
C: Getting Zoom to work in our conference rooms requires some technical finagling, so joining is a non-trivial ordeal. So, yeah — on application status: we're going to open up a proposal on the repo for how we think status should be computed, and we've run it by a bunch of people from an architectural perspective — are we looking at resources the correct way? What we're thinking now is that, for the first cut, we'll generally support status for those resources that are well known, like the workload resources — Deployment and so on.
C: I don't think in the first cut we're going to be supporting CRDs that aggregate other workloads, just because there is no best practice around that right now. We'd like to get the application controller doing this status computation basically before the Christmas holiday. We're not tied to the Kubernetes release cycle, but we're kind of targeting that for our metrics, and we're close to being ready to accept feedback from the community.
C: So when Barney opens the PR and we announce it, hopefully we get a good review on it and good feedback, so we do the right thing the first time around. Then, going into the next iteration, we'll probably have to extend it to support whatever the correct interpretation of CRD statuses is. Not all CRDs, we think, will even have a status, because you can actually just use them as structured data, and doing a review of the way CRDs are being used in the community...
C: ...that's not an uncommon thing, but a CRD is also very frequently used for a custom workload controller, like an etcd operator or some other operator, and in that case we want to try to come up with a good way for everyone to report status in a similar way. Another outcome of this is that we'll probably go back and review the workloads API status computations and try to make them a little bit more consistent. That might include doing things like adding conditions for StatefulSet and adding conditions for DaemonSet.
C: We had the conditions field prior to v1, but we didn't actually fill in any conditions, as we weren't sure about computing them. We also had a long conversation about whether conditions were the right abstraction in the general case, and decided to keep them for Deployment and then add them elsewhere as we go. We should probably start adding a readiness condition in the same way that Deployment has one.
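The condition pattern being discussed follows what Deployment already reports in `status.conditions`. A minimal sketch of how a client might read such a condition list (Python; the helper name and sample data are illustrative, not from the meeting):

```python
# Sketch: checking a Kubernetes-style conditions list, as discussed above.
# The condition shape mirrors what Deployment reports in status.conditions;
# the helper name and the sample data are illustrative, not from the meeting.

def condition_status(conditions, cond_type):
    """Return True/False for a condition type, or None if it is not reported."""
    for cond in conditions:
        if cond.get("type") == cond_type:
            return cond.get("status") == "True"
    return None  # not reported at all, like StatefulSet before conditions existed

deployment_status = {
    "conditions": [
        {"type": "Available", "status": "True", "reason": "MinimumReplicasAvailable"},
        {"type": "Progressing", "status": "True", "reason": "NewReplicaSetAvailable"},
    ]
}

print(condition_status(deployment_status["conditions"], "Available"))  # True
print(condition_status(deployment_status["conditions"], "Ready"))      # None
```

The `None` case is exactly the gap described: a conditions field that exists but was never filled in.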
A: So the ultimate outcome is that you could deploy an application with, say, something that is running as a StatefulSet and something else that is running as a Deployment, and you could see when the whole thing is up and when the whole thing is down by looking at the overall status for a particular application. Yeah.
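The roll-up being described — individual workload statuses aggregated into one application status — could be sketched roughly like this (Python; the per-kind rules and field names are simplified assumptions, not the actual proposal):

```python
# Sketch of the aggregation described above: an application is "up" only when
# every workload it is composed of reports ready. The per-kind rules here are
# deliberately rough and illustrative, not the proposal's logic.

def workload_ready(kind, status):
    if kind == "Deployment":
        # Deployment reports readiness via its Available condition.
        return any(c["type"] == "Available" and c["status"] == "True"
                   for c in status.get("conditions", []))
    if kind in ("StatefulSet", "DaemonSet"):
        # No agreed condition yet; fall back to comparing replica counts.
        want = status.get("replicas") or status.get("desiredNumberScheduled", 0)
        have = status.get("readyReplicas", status.get("numberReady", 0))
        return have == want
    return False  # unknown kinds (e.g. arbitrary CRDs) unsupported in a first cut

def application_ready(components):
    return all(workload_ready(kind, status) for kind, status in components)

app = [
    ("Deployment", {"conditions": [{"type": "Available", "status": "True"}]}),
    ("StatefulSet", {"replicas": 3, "readyReplicas": 3}),
]
print(application_ready(app))  # True
```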
C: Ready and unready, and there are a few other interesting conditions that we're kind of debating about whether they make sense. So "ready" basically means that you can actually take traffic: your network is configured and the whole workload is up and functional. "Settled" might be something like — think about how Deployment has min availability before it reports readiness — so settled means that you're actually not updating anything, you're at the right version of the application, and you're also ready to take traffic.
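The ready-versus-settled distinction can be illustrated with a Deployment-like status (a hedged sketch; the field and function names are hypothetical, not an agreed API):

```python
# Sketch of the "ready" vs "settled" distinction described above, over a
# Deployment-like status object. Names and fields are illustrative only.

def is_ready(status):
    # Ready: enough replicas are serving to take traffic (min availability met).
    return status["availableReplicas"] >= status["minAvailable"]

def is_settled(status):
    # Settled: ready AND no rollout in flight — every replica is already at
    # the desired version of the application.
    return is_ready(status) and status["updatedReplicas"] == status["replicas"]

mid_rollout = {"replicas": 4, "availableReplicas": 4,
               "updatedReplicas": 2, "minAvailable": 3}
print(is_ready(mid_rollout), is_settled(mid_rollout))  # True False
```

A workload mid-rollout can thus be ready (serving) without being settled (converged), which is the nuance being debated.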
C: We're not trying to make you change; you can reuse what you have. But we're trying to come up with a standard way that everyone can use to report status for their part of an application — though it wouldn't be mandatory to opt in for now. So we're trying to lower the barrier to entry as much as possible while still being useful.
A: So, if I understand it right, there are a number of different ways that status is being reported today, and it's kind of inconsistent, so building something that reads other statuses and then reports up an overall status is a little bit difficult to do because of the legacy inconsistency. And so this is what we're trying to come up with: some patterns that people could follow consistently, making it easier to surface an overall status. Right, right.
C: So Deployment basically reports its readiness directly — it has a condition that basically says it's available. StatefulSets don't really have that, though there are ways you can tell what ready means. I think, semantically, there was general agreement on when a Deployment is ready. StatefulSet is probably a little bit harder, because — I mean, what you can do is say it's ready if all the replicas are running and fully available, but the question is whether that's the right semantics. And then for DaemonSet, is readiness really...?
C: What I can see is that there's a readiness threshold, but ideally — I mean, I'm not sure we're talking about that — we weren't really sure there's a huge benefit to being able to say that something's ready when it's under-replicated, because ideally, if the declarative intent is five replicas, you want all of them to be ready, right? And for the reporting of status, it's not like pod readiness, where it affects the network; it's really just communicating to the end user that it is ready.
C: StatefulSets tend to be much smaller than Deployments anyway, and DaemonSets can launch a lot of things in parallel and are only really sensitive to the number of nodes in the cluster, as opposed to being sensitive to the actual replication configuration the user has declared. Of the different types, DaemonSet is the one where a readiness threshold would make the most sense, and we don't do it there. Well...
B: A threshold, maybe, though not for DaemonSet, because that is kind of all-or-nothing — it's like an any-or-none. But I have seen — and this is arguably kind of an abuse of the DaemonSet — I have seen DaemonSets that are fronted by a Service, so as soon as the first DaemonSet pod comes up it's effectively ready. There's nothing stateful about it; it's just that, for whatever reason, the decision was made that this needs to run on every node, but as long as any of them are up, we're ready to accept work.
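The two DaemonSet interpretations being debated — ready only when every node is covered, versus ready as soon as any pod is up (the Service-fronted case) — can be sketched side by side (illustrative Python over DaemonSet-style status fields; neither policy is an agreed API):

```python
# Sketch of the DaemonSet readiness debate above: "all" readiness (every
# scheduled node has a ready pod) vs the "any" interpretation for DaemonSets
# fronted by a Service. Both policies are illustrative, not an agreed API.

def daemonset_ready_all(status):
    return (status["numberReady"] == status["desiredNumberScheduled"]
            and status["desiredNumberScheduled"] > 0)

def daemonset_ready_any(status):
    return status["numberReady"] > 0

partial = {"desiredNumberScheduled": 10, "numberReady": 1}
print(daemonset_ready_all(partial), daemonset_ready_any(partial))  # False True
```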
C: But the readiness semantics that we have for Deployment are slightly different, right? What we're saying is not just ready to accept work but fully replicated, meaning that we can accept work at the level that the user has requested, and it gets even more complicated now that we have auto-scaling. Yeah. So maybe even reviewing Deployment is a worthwhile exercise, to determine whether readiness for Deployments still has the semantics that we wanted, given the changes in the ecosystem. Maybe we need a SIG Readiness.
A: We don't need any more SIGs, let's be honest — we've got enough. But yeah, figuring out readiness, though — I think the application controller is a good place to do it, because we're going to be looking at readiness across a bunch of things, both built-in controllers and CRDs and CRs. So it's a nice place to look at it, see what the mess is, and maybe try to plot a pragmatic path forward to help us make this simpler.
C: Probably back to back: one item has basically already been released, and then we want to update to the latest version of Kubebuilder. Yeah, there's an issue for that. I think there's someone who's trying to get a PR together, but they're struggling a little bit, and we can help point them in the right direction. So hopefully we can get that PR in. Okay.
C: And then one other thing that was sent to me: someone had a report that they were trying to use the Application CRD with the new set-based selectors, and they were having issues with it, but I wasn't able to reproduce it. I was curious if anyone else has run into that problem. My intuition is that they set the set-based selector up in a way that didn't match the actual things they were selecting.
C: I tried it and it seemed to work for me, and in theory it uses a label selector that is basically the same as everyone else's, which would support set-based and legacy equality-based selection. I think the other feedback we got was: why are we using the app label? And it was basically that that was a recommendation the apps working group came out with — labeling applications in consistent ways that would be usable across the whole ecosystem. So yeah.
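For context on the selector discussion, here is a simplified sketch of equality-based (`matchLabels`) versus set-based (`matchExpressions`) selection; it mirrors Kubernetes label-selector semantics but is not the real implementation:

```python
# Simplified sketch of the two selector styles mentioned: legacy
# equality-based matchLabels and the newer set-based matchExpressions.
# Mirrors Kubernetes label-selector semantics; not the real implementation.

def matches(selector, labels):
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    for expr in selector.get("matchExpressions", []):
        key, op, values = expr["key"], expr["operator"], expr.get("values", [])
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
    return True

pod_labels = {"app.kubernetes.io/name": "wordpress", "tier": "frontend"}
selector = {
    "matchLabels": {"app.kubernetes.io/name": "wordpress"},
    "matchExpressions": [{"key": "tier", "operator": "In",
                          "values": ["frontend", "backend"]}],
}
print(matches(selector, pod_labels))  # True
```

A selector whose expressions don't line up with the objects' actual labels silently matches nothing — which fits the intuition above about the reported issue.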
A: All right, I'll take silence as nothing. And then the last one was one that I just added here earlier today. All right, so one of the selling points that a lot of folks have for Kubernetes is the idea of portability: I could take my Kubernetes application and I can run it in AWS, I can run it in GKE, I can run it in AKS — you know, I can move it all over the place. I could use DigitalOcean.
A: I can move it all over, but the reality is that a lot of folks use services, right? They're not going to run their own MySQL or MariaDB cluster; a lot of times they use databases as a service and just query it — get the API endpoints and be ready to go. Well, all the vendors have their own APIs for all of these things, and they even have certain nuances to them as well in how some of this works. And end-users want to be able to port their things.
A: Portability is one of the selling points that people are excited about with Kubernetes, because you can port it — but we don't really have that. For quite some time we've had the Kubernetes Service Catalog, an incubator project working to try to do this, with the vendors trying to come together to work this out. But unfortunately it's not doing that well these days, it's not where it needs to be, and you don't really have good support across clouds, so we don't really have that portability.
C: People have worked on this, right. In order to provision credentials and coordinates for finding the services that existed off-cluster — things like databases or Redis or any of those things — they came up with the service broker, and the model we have in the Kubernetes service broker is almost the exact same thing as that model.
C: But the problem, I think, that people conceptually had is that the pieces are very declarative, but the workflow for actual provisioning and binding is very imperative, right? And then there's no automatic way to get coordinates injected into your stuff, no automatic way to get secrets and credentials injected into your stuff. It's just the experience.
A: They released this, and it kind of goes this way — it's built on operators, right, but it's AWS-only. And if I remember right, AWS never really got involved in the Service Catalog. I understand vendor motivations and things like that, but this is AWS-only, which comes back to: okay, now you've maybe got a more Kubernetes-native way to get AWS things, but you don't have a vendor-neutral way, because I can't take those same CRDs and port them to GKE.
A: I can't take those same CRDs and port them to Azure, because it's not going to work there. This is AWS-specific, and that gets us back into the vendor lock-in, the specific-vendor world, where maybe Kubernetes can run my custom workloads, but if I go for a SaaS, even the common ones, I'm back to my single vendor — which actually removes the portability idea.
C: Some of the critical feedback that I saw given to this project was: great, another way to lock me in — now, even with my portable stuff, I'm locked into AWS. But I don't feel like that was the developer's intention; he just wanted to make it easier for people to turn up stuff and use it inside their clusters. And for things like Dynamo — if you're using Dynamo, or Cloud Bigtable, or one of the services that only exists on a particular vendor's implementation.
C: There is no notion of what portability might look like for those. But I wonder — and it might be interesting, to me at least — there are some basic fundamentals that Kubernetes abstracts: networking, compute, storage. Extending that to some other fundamentals that are generally ubiquitous across provider implementations might be a useful thing to do. So I can imagine a single CRD that represents interaction —
C: Well, a single set of CRDs that implements interaction with object storage, say, because object storage is fairly ubiquitous, and even for on-prem installations you can get Dell or EMC appliances that pretty much implement a compatible API for object storage. Or particularly an RDBMS: there may be some features that you can get on one cloud that you can't get on another, but all of those providers support a managed offering for turning up a relational database management system. So...
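The vendor-neutral abstraction being imagined could, at the controller level, look like one small interface with per-provider implementations behind it. A purely hypothetical Python sketch (no real cloud SDKs or APIs are used; all names here are invented for illustration):

```python
# Hypothetical sketch of the vendor-neutral idea above: one "object store"
# interface, with per-provider implementations a controller would pick based
# on where the cluster runs. No real cloud SDKs are used; everything here
# is invented for illustration.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def create_bucket(self, name: str) -> str:
        """Provision a bucket and return its coordinates."""

class FakeS3Store(ObjectStore):
    def create_bucket(self, name):
        return f"s3://{name}"

class FakeGCSStore(ObjectStore):
    def create_bucket(self, name):
        return f"gs://{name}"

def reconcile_bucket(store: ObjectStore, name: str) -> str:
    # A real controller would also write the returned coordinates into a
    # Secret so workloads get them injected — the binding gap noted earlier.
    return store.create_bucket(name)

print(reconcile_bucket(FakeS3Store(), "sig-apps-demo"))   # s3://sig-apps-demo
print(reconcile_bucket(FakeGCSStore(), "sig-apps-demo"))  # gs://sig-apps-demo
```

The application manifest would only reference the neutral CRD; which implementation satisfies it becomes a per-cluster concern, which is the portability property being discussed.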
A: Yeah, and quite honestly, a lot of what people do is the standard shared services — MySQL-compliant API usage is still huge, right, and there's a bunch of services that are similar between providers. So I'm just wondering, first: if you're a non-vendor person, how much of a big deal is this — being able to have this without being on one of the major clouds?
A: Yeah, but at the same time, what I'm curious about is the amount of portability folks want in being able to run this. Because a lot of folks want to use SaaS, you know, and I understand it may not be the cloud native way to do it, but what you end up doing is punting the operations to somebody else.
A: If I don't have to operate my own MySQL or worry about that, I can focus on my own business logic, which is what a lot of folks want to do. So they punt to SaaS services that scale, that have teams of people behind them, and they just pay an itty-bitty cost to get in on this — it's the economies of scale, right. So it's not always just about "okay, well, there's a more cloud native way where I can run it myself." Part of the whole idea is:
A: I don't want to run it myself, because I want somebody else to deal with that problem; I just want to deal with the working service. And a lot of these are common services across different providers. They may have some nuances to them, but a lot of folks don't even take advantage of those; they're just going with the basics that they could run themselves.
A: They just don't want to. And so I think that there is a space in the portability realm that folks are going to want. Now, for a lot of the newer stuff we're getting away from that, because a lot of the open specs were from years ago, and we don't see as many open specs these days. A lot of it is more "I'm going to release my API and it'll be different from everybody else's."
E: I think it's useful too. Yeah, I mean, my previous comments were mostly concerning very specific things like that, and I would agree, right. But the other thing I wanted to add — and that's the last thing I'll say here — is that it may be a good idea to, you know, consider narrowing down the scope to, as we mentioned, object storage, and perhaps something that is MySQL-like.
E: So, you know, having a few options where you could try creating a MySQL database in Google Cloud and in Amazon and see how those match up — having a very simple CRD that lets you do that would be nice, and I don't think it requires an Open Service Broker kind of abstraction.
A: One of the things that caught my attention in this, though — if I understand it right, and somebody can correct me — is that if you're using Service Catalog and Open Service Broker, it required the providers to serve the Open Service Broker APIs to do things right. So AWS would have to go do those APIs, and Google would have to go do those APIs, and Azure would have to go do those APIs, and DigitalOcean would have to go do those APIs, and Alibaba would have to go do those APIs. It required the providers to be in on that game, and I know Google and Microsoft with Azure went in and invested a lot into this, but I don't think AWS did, if I remember right, and that was a pain point. I think it might work now, but finding docs and getting this stuff really up and running over there...
C: But you could easily implement the Service Catalog for a cloud provider as a shim, and just have a mediation layer that manages the interaction — let's say you want to do it on AWS for S3, for instance, right. If the community wanted to implement it, the service providers wouldn't really be able to stop them. Whether a vendor individually wants to invest in implementing a broker for their particular cloud is kind of up to them, but the community could do it if it really wanted it badly enough.