From YouTube: Kubernetes SIG Apps 20181008
A
I'm recording. Okay, welcome everyone to the October 8th SIG Apps meeting. My name is Adnan Abdulhussein and I'll be chairing us today. I'm pasting a link to the agenda document that we use, and you can follow along with the agenda there. We don't have any announcements to start off today, so unless anyone has anything they'd like to shout out about, we can move on to our first demo.
B
Can you see the presentation? Yeah, so basically this does not exist in Kubernetes right now; this is a very first shot at addressing this issue. So let me describe what the problem is. CI/CD is basically the standard right now, and systems roll out many times a day for small organizations, not to mention large organizations with microservices architectures. There are a lot of deployments done today.
B
Rollbacks don't happen quickly enough, because a human needs to watch: a human needs to react, make a decision, find the previous image, update the deployment again, and that can take time, money and downtime if something goes wrong during a manual process. So this tool basically watches metrics, using metrics that the user defines.
B
Currently it supports Stackdriver and Datadog, but basically you only have to provide one function that returns whether or not the new metrics are okay, and if they're not, it reverts to the previous version. We can mark multiple deployments to watch in multiple namespaces, and it's all driven by configuration; you configure it using a CRD.
B
In order to preserve state if the tool goes down, we're using annotations on the deployments themselves for state management: whether or not we are watching, and for how long we are watching, because we don't want to watch forever. We only want to watch for 5 minutes, 10 minutes, maybe an hour, so that's also defined by the user.
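The demo doesn't show the actual annotation keys, so as a rough sketch of the pattern being described here (persisting watch state on the Deployment itself), something like the following could work; the key names are invented for illustration and the call signatures are current client-go, not the tool's real code:

```go
// Hypothetical sketch of the annotation-based state persistence described
// in the demo. The annotation keys are invented for illustration and are
// not the tool's real ones.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markWatching records on the Deployment itself that a watch is active and
// when it expires, so a restarted watcher can resume without extra storage.
func markWatching(ctx context.Context, cs kubernetes.Interface, ns, name string, watchFor time.Duration) error {
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if d.Annotations == nil {
		d.Annotations = map[string]string{}
	}
	d.Annotations["example.com/watching"] = "true" // hypothetical key
	d.Annotations["example.com/watch-deadline"] = // hypothetical key
		time.Now().Add(watchFor).Format(time.RFC3339)
	_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
	return err
}
```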
B
Once the timeout, once the watch period, has expired, we are no longer watching. We also support what I call cascading deployments, meaning: you do a deployment and start watching it for an hour, and during that hour we redeploy because someone has a new feature. We support that: we stop watching the previous deployment and start watching the new one. And it runs in the kube-system namespace.
B
This is an example of the CRD. Basically there's the watch period, for how long, in minutes, to watch; there's the metrics source, which here is set to Datadog; and in each namespace that we want to watch, you provide a list of deployments: the name of the deployment and the name of the metrics. It's a different syntax for Stackdriver and for Datadog. For each deployment we put in the metric name that it's gated on and what the threshold is, and basically that's it.
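The manifest on the slide isn't reproduced in the transcript. As a sketch of the shape being described, the custom resource's spec might be modeled with Go types along these lines; all field and type names are guesses for illustration, not the tool's real API:

```go
// Hypothetical Go types mirroring the CRD described in the demo: a watch
// period, a metrics source, and per-namespace lists of deployments with
// the metric each one is gated on. All names are illustrative.
package v1alpha1

type MetricWatchSpec struct {
	// How long to watch after a rollout, in minutes.
	WatchPeriodMinutes int `json:"watchPeriodMinutes"`
	// Metrics backend: "stackdriver" or "datadog".
	Provider string `json:"provider"`
	// Deployments to watch, grouped by namespace.
	Namespaces []NamespaceWatch `json:"namespaces"`
}

type NamespaceWatch struct {
	Name        string            `json:"name"`
	Deployments []DeploymentWatch `json:"deployments"`
}

type DeploymentWatch struct {
	// Name of the Deployment to watch and roll back on failure.
	Name string `json:"name"`
	// Provider-specific metric query; the syntax differs between
	// Stackdriver and Datadog.
	Metric string `json:"metric"`
	// Threshold at which the rollout is considered failed.
	Threshold float64 `json:"threshold"`
}
```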
C
Yeah, I could see this being useful for things other than deployments, although I do see it as useful there. Maybe even in grouping a collection of assets around an application, because sometimes you'll have maybe a ConfigMap or a Secret and a Deployment, and maybe some other things, all grouped together in an application rather than a single asset. But these are just ideas.
A
All right, so we can move on to the discussion topics. We have the workloads API discussion today. There's one thing on this list, but if anyone wants to bring up anything else, we can also discuss that. So: DaemonSet upgrades and downgrades are broken, and it's linked to an issue here. Does someone have any context on this?
D
If you upgrade, and then you create a daemon set, and then downgrade, your daemon set will probably do a rolling update, since rolling update is enabled, and it would cause all the daemon sets to do it simultaneously. So what we're looking at right now is trying to get agreement to get this change to work in a way that's backward compatible, so that if you've downgraded, it's not going to be unexpected. That's kind of what's in play, so it's not out in the wild.
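The manifests under discussion aren't shown here, but the general hazard is that the update strategy is defaulted, and defaults can differ across API versions, so a DaemonSet that relied on the default can start rolling unexpectedly after a version change. A defensive sketch using the apps/v1 types is to pin the strategy explicitly rather than rely on defaulting:

```go
// Minimal sketch: pinning a DaemonSet's update strategy explicitly so its
// behavior does not depend on per-API-version defaulting. OnDelete only
// replaces pods when they are deleted; nothing rolls automatically.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
)

func pinStrategy(ds *appsv1.DaemonSet) {
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.OnDeleteDaemonSetStrategyType,
	}
}
```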
E
Again, I was trying to look into that as well. I think there might be other cases where this can be broken than just downgrade and upgrade, because if you're doing a rolling update and have old master controllers with a new API server, I think you can just hit an issue with controller revisions or something like that: you send the request without that field, it will be defaulted, and it won't match what the new controller expects.
D
The thing is that I don't think it would actually cause any strange behavior anywhere else, but it's a non-backward-compatible change either way. So there is a way that we think we can do it in a backward compatible way, and we still need to see if we can get agreement; I believe the feature is primarily node oriented, so it means getting SIG Node to agree to change the wording in their implementation. So basically it defaults to not being set, and that indicates enabled, and then we just change the wording: instead of 'enable' it will be 'disabled', and then you set it to false.
D
Well, the test failure that's presenting there is less of a problem to end users than the scenario we're talking about, where it would actually potentially hurt them. The test failure is caused by something different: basically, the test can no longer find the controller revisions on the upgrade, which is what kind of raised the concern in the first place. But ultimately that behavior isn't going to be highly disruptive to users' clusters on upgrade the way a downgrade would be.
D
You shouldn't see any problems until you actually update the daemon set, and that wouldn't be a problem: you would just be rolling forward with an update to your DaemonSet. If you upgrade and then downgrade, you get unintentional rolling updates, which is an actual problem, and would probably be a problem for most users. Not that downgrades are something that we see a lot; it's just that it's a feature of the software that should in general work, for people who do manage Kubernetes clusters and, you know, do potentially have a need to downgrade.
D
Okay, one other thing I wanted to talk about is potentially taking cron jobs to GA. Like, what do we think about doing this?
E
I'm not sure if Maciej is here today, but I bet he would have comments there, because he's had loads of bug reports and performance issues he's heard about using cron jobs. So I can definitely let him know about that. But last time I talked to him, he said this needs to be rewritten to shared informers before it can go to GA, because there are issues with it.
D
Okay, I kind of agree with that. The one thing that's not clear to me, from the API perspective: I think we're okay with the API, and I don't see a lot of requests to change it, other than the one immediate request we did get, time zone support, which got kicked to SIG Architecture, and we fundamentally decided we are not going to do that, because basically we don't really want to carry a time zone database around in every Kubernetes distribution.
D
It kind of changes the meaning of what a Kubernetes distribution actually is. So from the API perspective, I don't see a lot of problems or issues or things that we should be adding or removing prior to taking it to GA. The shared informers work is more of a stability thing, so I think maybe we should prioritize getting it implemented in the near future, because once that goes in, we're going to want to soak it for at least a release or two before calling it GA.
D
To begin with, 1.13 is a short cycle. You know, it's going to come right on the heels of 1.12, and there's been a lot of updates in 1.12 as well. With that release process, we likely don't touch it in Q4, but I think by Q1 we want to be thinking about a plan going forward to get it implemented if we want to GA in 2019. It's kind of the last thing on the workloads API that's still in beta, and at least from what I've seen in the community, it's actually quite popular.
D
Right. I think if we're going to put it in, well, any cache is always a risk, so sneaking it in under users' noses, I don't know about that; I'd rather add the cache first if we're going to do it. I don't see a reason not to use shared informers. If we think we can do it in a stable way, I think we can.
D
But we haven't called that yet. That would be fine, but I don't think people are generally comfortable doing that. We want to add shared informers, which adds a caching layer between the controller and the API server, and in distributed systems in general, and Kubernetes in specific, caching layers are hard, super hard. So we want that settled before we call it GA and tell the users that, okay, it's done, it's ready: you can rely on it, you can depend on it.
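For context on what the rewrite involves: a shared informer gives each controller an event-driven, locally cached view of a resource instead of repeated lists against the API server. A minimal client-go sketch of the pattern, using the current batch/v1 CronJob API (at the time of this meeting it was batch/v1beta1):

```go
// Minimal sketch of a shared-informer-backed view of CronJobs with
// client-go; this is the general pattern the rewrite would adopt, not
// the actual controller code.
package main

import (
	"fmt"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One factory is shared by all controllers in the process, so the
	// underlying watch and cache exist once per resource type.
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	cronInformer := factory.Batch().V1().CronJobs().Informer()

	cronInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cj := obj.(*batchv1.CronJob)
			fmt.Printf("observed CronJob %s/%s\n", cj.Namespace, cj.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Wait until the local cache reflects the server before acting on it.
	cache.WaitForCacheSync(stop, cronInformer.HasSynced)
	select {}
}
```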
D
What we'd probably want to see is it soaking with the shared informers for at least one release, if not two, just to make sure everything is, you know, ready to go. The way I would plan it would be something like: if we can get the shared informers in in Q4 or Q1, let it soak for one quarter at a minimum while still in beta.
D
If we don't think it's stable enough, then you slide it one release. So plan it for two releases out, and you can move it forward in the event that, you know, you just hit it out of the park on the first go-around when you put the shared informers in. In the event that it takes two quarters to do it, that's fine too. If you promise two quarters and hit it in one, no one will be upset with you; if you promise two and hit it in two, you know you held up your end of the bargain anyway.
C
That makes sense, as long as we've got a path for it. I think the important thing is really a path for it, because CronJob has been sitting around for quite a while without having a path towards a GA release, and if we can track that we're actively working towards it, I think that alone will make people happy, yeah.
D
He has been very busy doing other things across the project, so there's been a little bit of that there. It's been good enough that people haven't been complaining about it, usually; we've got other issues with Job and object quota that are taking more time than CronJob. So I mean, I guess we know what we need to do; I just don't think we've formalized that in an issue that says, here's the tracking plan for GA, and then just gone forward with that.
D
The only thing I don't have a good grip on, and I think Maciej might have a better grip on, is whether the API is exactly where it needs to be. Like, I feel good about it, but I would like other feedback. I haven't gotten any negative feedback, but I would like to have a more thorough consensus on that before we say, okay, we're ready to move out of beta.
D
I mean, he shouldn't feel pressured to be the only one that will do the shared informers work; if he's busy doing other things that are important, we can probably get some other cycles from the community in order to help. Granted, it is tricky, so it's not something that we'd like to put out as a request for help for your first contribution, but I think we can find somebody in the community who can help move it forward.
C
It does bring up kind of an important thing, though: how do we build up more people who are capable of contributing to the controllers, in particular in this case? Like, if I went to look for documentation on what's going on with shared informers, how they're used, how they're being added, is there much written, so that somebody who even just wanted to understand what was going on, maybe not even write it, could dig into those details?
D
That being said, there's lots of gotchas with the caching and how it actually works. Like, it uses pointers everywhere, but because Golang doesn't have const pointers, you can mutate them, and if you do mutate them, you're probably going to cause a crash. There's just lots of little gotchas that probably aren't necessarily well documented or made particularly explicit, and we might be able to do better there. But I think one of the things that is positive is Kubebuilder and Operator Kit.
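The pointer gotcha mentioned here is the classic informer-cache one: objects returned from a lister point into the shared cache, so mutating them in place corrupts the view every other consumer sees. A minimal sketch of the safe pattern, deep-copying before modification:

```go
// Minimal sketch of the informer-cache mutation gotcha: objects returned
// by a lister point into the shared cache and must be deep-copied before
// any modification, or every other consumer sees the mutation.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	appslisters "k8s.io/client-go/listers/apps/v1"
)

func annotated(lister appslisters.DeploymentLister, ns, name string) (*appsv1.Deployment, error) {
	d, err := lister.Deployments(ns).Get(name)
	if err != nil {
		return nil, err
	}
	// WRONG: d.Annotations["k"] = "v" here would corrupt the shared cache.
	out := d.DeepCopy()
	if out.Annotations == nil {
		out.Annotations = map[string]string{}
	}
	out.Annotations["example.com/touched"] = "true" // hypothetical key
	return out, nil // send the copy to the API server, not the original
}
```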
D
These have been written by very few people over time, and they're a little bit more complicated to get into, just because of some of the nuances. I don't know how it is for a new person; I can ask some of the newer folks how approachable they found it. I haven't seen people onboard and struggle insanely, but I mean, those are people who are dedicating a large portion of their time to just understanding how they work.
D
I don't know how approachable it is for somebody who would be a novice, who's like, I just want to come in and make a couple of contributions; that might be a lot harder, and I'm not sure how we can make that easier. But I'm open to suggestions.
C
Yeah, and some of it is just folks understanding what's going on, even without being able to work with the concepts. Like, I've gone through API Machinery code looking for documentation or anything that explains how all of it is intended to work, and that's an area with a shortcoming, although this is something I'll probably carry over to the contributor experience folks. The lack of conceptual documentation to help people wrap their minds around what should be happening makes it difficult for new folks to come in.
D
Machinery is still moving extremely fast, so even when they try to document what's going on, it changes so rapidly that it's hard to keep up. Like, going back two years, there were no custom resource definitions; there were third-party resources. Then those disappeared, and then we added extension API servers, which really changed the way the API server works, because now it can communicate with other API servers to forward stuff through, and they actually ended up breaking some things in controller manager in interesting ways, which we have since seen.
D
Then we added CRDs and got rid of TPRs; all of these are fundamental architectural changes in how the machinery actually functions. And it's ongoing and continuous: server-side apply is going to be a huge thing, as is dry run. They're just taking on large, large chunks of scope and changing the way things work so rapidly that documenting it is probably hard, and even just staying up to date and staying conceptually aware of what's going on there is kind of difficult.
D
I'll say one thing: for Apps in general, we're pretty good about addressing broken things, keeping our tests green and, when they're not green, understanding why. So this hasn't needed major attention externally, because we've been focused primarily on core stability more than on new features for a while now, and we dedicate most of our effort there.
D
So I can tell you some stuff that we've been kind of working on. One thing we've been talking about a lot is how to correctly interpret application status, and we're working on implementing a controller that computes application status based on the types of things that the application selects. We'd like to get that done in the near term; we're not tied to Kubernetes releases, so we're not...
D
We don't have to, like, put it into the 1.14 or 1.13 plan, but that's something we'd like to get done. And we've been talking a lot with the API Machinery and app lifecycle folks, trying to get agreement so we know what it means for a deployment to be ready. Basically, we have good notions of what readiness means for a StatefulSet, for a DaemonSet, for a Pod, and for a replication controller, but CRDs are kind of the hard target.
D
Actually, the community is kind of moving towards encouraging things to be outside of core and to use custom resources to represent their state. So the API machinery folks are trying to come up with some best practices about how status should be represented inside custom resources in order to communicate workload
D
readiness, and we'd like to adopt that as well, because we are strong believers that, as Kubernetes evolves and as we work towards a more stable core with experimentation in custom resources and add-ons, we're going to see more and more CRDs, and interoperating correctly with applications is probably going to require some level of standardization around communicating the status of CRDs into the application as well. So that's what we're kind of talking about now.
D
We're making a roll-up right now, so you roll the status up, potentially aggregating, and primarily using conditions. Like, phase is okay, though it's mostly deprecated, because we don't want people to build state machines; that has kind of been the thinking behind it. Conditions are generally always okay, so we're thinking of readiness, as it's represented in terms of applications, as being a condition that is incorporated into the status, in the same way a Deployment or a DaemonSet can incorporate multiple conditions simultaneously.
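As a concrete illustration of the conditions-over-phase convention being described, where each condition is an orthogonal observation rather than a single state-machine field, a status block might carry types along these lines (a minimal sketch; the names are illustrative, not the Application CRD's actual API):

```go
// Minimal sketch of the conditions pattern: status carries a list of
// orthogonal, observable conditions instead of a single "phase" field.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type ApplicationConditionType string

const (
	// Ready aggregates over the workloads the application selects.
	ApplicationReady ApplicationConditionType = "Ready"
	// Assembled is set by whatever tool installed the application.
	ApplicationAssembled ApplicationConditionType = "Assembled"
)

type ApplicationCondition struct {
	Type               ApplicationConditionType `json:"type"`
	Status             corev1.ConditionStatus   `json:"status"` // True, False, Unknown
	LastTransitionTime metav1.Time              `json:"lastTransitionTime,omitempty"`
	Reason             string                   `json:"reason,omitempty"`
	Message            string                   `json:"message,omitempty"`
}

type ApplicationStatus struct {
	ObservedGeneration int64                  `json:"observedGeneration,omitempty"`
	Conditions         []ApplicationCondition `json:"conditions,omitempty"`
}
```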
D
They communicate the more in-depth meaning of, like, a rollout, or whether it's ready. We would like to do the same thing in the application. Now, the other consideration, which we'll probably talk about more in SIG Apps later, would be whether to add conditions like Deployment has to StatefulSet and DaemonSet. So right now I can get, well, not readiness in the scope of a Deployment but its availability: the availability of the Deployment can generally be assessed by looking directly at the availability condition on the Deployment object itself.
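For reference, reading that availability condition off a Deployment looks roughly like this with the apps/v1 types (a minimal sketch):

```go
// Minimal sketch: assessing a Deployment's availability by reading the
// "Available" condition from its status, as described above.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

func isAvailable(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable {
			return c.Status == corev1.ConditionTrue
		}
	}
	// No condition recorded yet: availability is unknown, treat as false.
	return false
}
```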
D
That's a pattern we want to support, and we might want to add those types of conditions to StatefulSet and DaemonSet as well. This can be done in a backward compatible way, because basically status is reconstructable on the fly, and the strict compatibility constraints across releases are really reserved for mutating spec.
D
We added conditions as a type to StatefulSet and DaemonSet prior to v1 because we anticipated that at some point we would add some conditions to them; right now we don't populate them at all. So we'll probably have a proposal in the near future, after having some more talks with API Machinery, about potentially adding some conditions to those API objects as well.
D
And that communicates to the application controller, from the perspective of the installation tool, whether it's been completely assembled, right. So whatever is assembling the application says, okay, you're done. Maybe it says nothing, and you're assembled; if it says in progress, it's communicating that it's still assembling you; and if it's successful, or you remove it, okay, you're done. That aspect should be communicated by the controller back to any client via the status field of the application. Okay.
D
You could use that to go detect the component directly, but it's not going to do a complete roll-up aggregating all the status fields of every object, because when we went there, the feeling was that that's too much, and it's kind of not the Kubernetes API style. But you sure can communicate that something's broken, communicate that something's available, and give coordinates to go find it. Again, we don't copy the entire status of that object into the application. Okay.
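A minimal sketch of that "coordinates, not copies" idea: the application's status points at the unhealthy component with an object reference instead of embedding its whole status (field names here are illustrative):

```go
// Minimal sketch: the aggregate status carries a reference to the failing
// component rather than a copy of its entire status.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
)

type ComponentStatus struct {
	// Coordinates of the component (kind, namespace, name) so a client
	// can go inspect the object directly.
	Ref corev1.ObjectReference `json:"ref"`
	// Summarized health only; the component's full status is not copied.
	Healthy bool   `json:"healthy"`
	Reason  string `json:"reason,omitempty"`
}
```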
F
Actually, we covered most of the ground here, as I've updated in the doc. It's a very short timeline: we only have about six weeks for feature planning and coding, so the feature freeze is on the 15th of November, and we're starting our enhancement collection today. I don't see any enhancements planned from Apps on the spreadsheet yet, so if you want to go ahead and add anything that you think is doable in the 1.13 timeframe, please go ahead and do that.