From YouTube: Kubernetes SIG Apps 20181105
A: Okay, welcome everyone to the November 5th SIG Apps meeting. My name is Adnan Abdulhussein and I'll be hosting today. I'm going to paste the link to the agenda if you want to follow along. The first thing we have today is an announcement: if you're planning to go to KubeCon in Seattle in December, you should look into booking your hotel now. I think, Matt, correct me if I'm wrong, but all the conference hotels are booked out at this point.
B: Yes, I believe so. I looked into that earlier, or I should say the middle of last week, and I think they were all booked up, but there are other hotels within the same distance, so grab one before those are booked too. I want to say this KubeCon is going to be the biggest yet. I've heard numbers that I want to wait to confirm, because it's many, many more than any conference I've been at. It's going to be huge.
A: So we don't have any demos today, so we'll skip straight to discussion topics. If anyone wants to do a kind of impromptu demo later, when we get to the open discussion, that would be cool. For discussion topics, we have KubeCon Shanghai coming up. Let me just grab the dates: I think it was November 13th, yeah.
B: That's where the SIG is reviewing and debating and whatnot. Anything that has API changes or major architecture changes does need to go to SIG Architecture and be reviewed. So in this case we're talking about a new pod restart policy, and that's what I was hoping Ken would be here for, but he's having issues getting here today to discuss and debate it. Because the pod restart policy would be a new API property and new functionality, it would be additive and therefore needs an API review.
B: Once it gets into an implementable state, meaning the KEP is agreed to and the SIG has signed off on it, then it moves into "in progress", where it can go and be implemented. Once it's implemented, there is another column here, "completed/closed", and that's where it would go. So right now we have a number of KEPs in flight and in SIG review that we'd like to discuss here, because they have to do with workloads, and some of them are in other places, as you can see: there's garbage collection stuff, sidecar containers.
B: Let's see. So here's the KEP, and the gist of it, if y'all haven't read it yet, is to add a pod restart policy. Right now containers are restarted, but you won't get things such as an init container running again along with that. So the idea here is to add another restart policy, in addition to the ones that are already there, to restart the whole pod if a container crashes.
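As a rough sketch of the idea being discussed: note that today `spec.restartPolicy` only accepts `Always`, `OnFailure`, and `Never`, and restarts failed containers individually without re-running init containers, so the value shown below is purely hypothetical and taken from the spirit of the KEP, not from any implemented API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  # Hypothetical value from the KEP discussion; NOT an implemented field.
  # The existing values (Always, OnFailure, Never) restart only the
  # failed container and never re-run init containers.
  restartPolicy: RestartPod
  initContainers:
  - name: setup
    image: busybox            # illustrative image
    command: ["sh", "-c", "echo ready > /work/flag"]
    volumeMounts:
    - name: work
      mountPath: /work
  containers:
  - name: app
    image: busybox
    # With the proposed policy, a crash here would restart the whole
    # pod, so the setup init container runs again first.
    command: ["sh", "-c", "cat /work/flag && sleep 3600"]
    volumeMounts:
    - name: work
      mountPath: /work
  volumes:
  - name: work
    emptyDir: {}
```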
B: This is the issue where Kubernetes needed the application built right into it, and I think it actually takes a minute to load, because it was one of those that had so much discussion that it kept causing GitHub to crash. For quite a while we couldn't even close it, because it gave us a server error: too many comments, too much content.
B
So
this
should
be
closed
and
in
fact
it
is
closed,
but
I'm
not
sure
the
right
way
that
Brian
and
Jace
are
handling
all
that
tracking.
At
the
moment,
okay
I,
don't
know
if
it
goes
to
close
completed
or
whether
it
just
gets
removed
from
the
board
I'll
get
with
chase.
This
we
can
find
out.
Okay
sounds
good.
B: I don't have much more to talk about here. Ken usually drives the workloads API stuff, which is what this week is all about: workloads API things. Unless there's open discussion, there are some other things. There's the sidecar containers KEP, which I think we've talked about here, and garbage collection. Did we talk about this one, Adnan? I don't remember.
B: The general idea here is to add new ways to garbage collect objects. So, for example, say you've got secrets that aren't being used and have been left around for months and months: there's no automated way to garbage collect those. Or top-level objects that don't have owner references: there's no way to say "garbage collection, go get these". Quite often stuff gets left around in some situations, and so the idea here is to add mechanisms to say: go garbage collect this. Like, maybe it's no longer attached to a pod anymore; garbage collect it when it's not needed, you know, after so long a period of it not being used. There are some possible issues with that, but if folks are interested, please go dig into this. The whole idea is giving new ways to garbage collect stuff.
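For context on the gap being described, today's garbage collector only cleans up objects that carry an `ownerReferences` entry pointing at a still-living owner; the names and uid below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
  # Deleting the owning Deployment cascades to this object.
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-app                                 # illustrative owner
    uid: d9607e19-f88f-11e6-a518-42010a800195    # must match the owner's uid
    controller: true
    blockOwnerDeletion: true
```

A hand-created top-level object, such as a Secret with no owner reference, never matches this mechanism, so nothing ever collects it; that is the case the KEP wants to cover.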
B: There's a lot of detail in there, implementation detail and cases, and even some discussion around rough edges and proposed changes. The reason it comes up on our board is, let's see... it is owned by SIG API Machinery; we are a participating SIG because it impacts the workloads API. And I believe everything in this would be opted into; it wouldn't make an API change out of the gate.
B
That's
that
one
we've
discussed
sidecar
containers
do
remember
this
one:
the
the
identify
API
usage
patterns,
which
applied
and
throw
ecology
every.
Do
you
remember
this
one
I
go
over
it
again
like
we
probably
need
to
revisit
this
one
across
the
board.
But
the
idea
here-
and
this
is
not
workloads-
API
specific-
is
to
try
to
figure
out.
It
was
by
hippie
hacker
trying
to
figure.
C: So this is cleanup work. This is making sure that we are positioned to be able to turn those off by default. We would want to start pre-announcing that, probably in 1.13, saying the target for the beta API no longer being served by default is, for example, 1.15, similar to what we did with etcd v2, you know.
C: It'd be in the release notes multiple releases in advance. My intent would be to no longer serve those by default in 1.15. But obviously, if you had data in etcd in those versions, we would still be able to read that from etcd, so there would be no migration impact, and I would anticipate there being a way to turn those back on, at least for a few more releases.
C: So we're just trying to move the ecosystem towards the GA versions and communicate that well, and make sure that everything we're demonstrating and documenting is using the supported GA versions of those things. So if you're interested, jump into that PR. I think I tagged Matt and Ken; if you can review or nominate a reviewer, that'd be great.
D: If you give me a link, I'll take a look as soon as possible to get it off my plate. But my only concern is, I mean, technically we deprecated this a while ago. If we do this at the same time that we remove the dependencies in kubectl, do we break our version skew compatibility between the client and... yes.
E: Yeah, give me a second to get in context; I just walked into the room. [room noise]
D: Looking at this KEP versus the sidecar KEP, my personal preference is to go with sidecars, because that seems to be the most common use case, and the semantics of that seem to be more well defined. My only concern with this one is that I can imagine some rather strange behaviors that would be unanticipated. And the other thing is, even if we want to do this... I mean, from our perspective it's the pod and workload API, but the reality is most of the work to do this would be inside of the kubelet.
B: If you do this with sidecar containers, then what would you do? Say we're using an init container to set up another container, and then that container crashes. Now what do you do with it? You can't necessarily just restart that container without the init container going off in all cases. How would you use a sidecar for that situation?
D: Oh wait, this is... okay, this is the option that's going to take it up to the pod level and try to do the whole thing. So the advice we've been given on that is: if you really need this, run them as separate pods to manage the lifecycle. That's the easy one, right? If you have a bunch of init containers that are going to run with a particular container, use two pods if you need to manage the lifecycles that independently.
B: So the example is... let's see if I get the scenario right here. The OnFailure and Always restart policies effectively manage the lifecycle of the containers of a pod. The support for multiple containers in a pod also enables better modularity and separation of concerns between different containers. It also promotes looser coupling between components, and init containers provide some additional support for modularity and looser coupling for the functionality of initializers of the pod.
B
It
make
it
possible
to
separate
the
initialization
from
the
rest
of
the
pod
to
enhance
both
modularity
as
well
as
security,
but
both
the
on
failure,
as
well
as
the
always
reached
our
policies,
restart
the
individual
containers
in
question
and
not
the
whole
pod.
This
is,
for
the
most
part,
desirable,
even
optimal.
However,
there
are
scenarios
some
documented
in
this
issue.
B
So
if
the
deployment
has
a
container
in
it
and
it
restarts
and
the
whole
pod
doesn't
restart,
but
it
relied
on
an
init
container.
What's
the
alternative,
the
unique
container
didn't
restart
so
the
deployments
not
going
to
come
back
up
in
its
right
state,
because
the
whole
pod
needs
to
be
restarted
in
this
setup.
D: So, like DaemonSets in particular, Jobs in particular, there are a lot of scenarios where you have a particular container that isn't the main container, and you want to manage the lifecycle of that container with respect to the main container, right? Even for Deployments this is a case. And we don't really have any type of ordered termination where you can say: okay, well, this pod's a sidecar, if it dies don't take down the main container; or this container's a sidecar, if it dies don't restart this one.
D: There are lifecycle and termination semantics where actually saying something is a sidecar has meaning. We have a couple of open issues surrounding that.
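The sidecar idea being referenced here amounts to letting users declare which container is auxiliary, so the kubelet can order its startup before, and its termination after, the main containers. A purely hypothetical sketch of such a marker follows; no such field existed at the time of this meeting, and the shape of the eventual proposal differed across KEP revisions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
  - name: log-shipper
    image: busybox
    command: ["sh", "-c", "tail -f /var/log/app.log"]
    lifecycle:
      type: Sidecar    # hypothetical marker: start first, terminate last,
                       # and don't block pod completion or restarts
```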
D: Looking at this issue, the guy or gal is, I mean, they're simply using the init container for what it was intended for: they're pulling something down and then formatting it into the filesystem of the pod itself.
D: If they were using a volume, I'm assuming they're doing something like an emptyDir, or they need to refresh it every time it starts because it expires; it's kind of vague on what that means. But in any case, the general pattern we see here is what Brian likes to call entrypoint.sh, where effectively you wrap whatever you're doing for your main process: you exec out to it from a shell script, and whatever you need to do prior to it, you do inside of the script.
D: So in this case you'd have a bash script that pulls this content down and then stores it prior to launching the main process, and every time you restart the pod that entrypoint script gets called and actually would go through this flow. So, I mean, the solution to this guy or gal's problem would effectively be: write a script that does this inside of the pod's entrypoint, exec out to whatever you're trying to run, and it should just work. It's like a per-container lifecycle hook, yeah, pretty much.
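The entrypoint.sh pattern described above can be sketched directly in a pod spec; the image and the fetched URL are illustrative stand-ins:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo
spec:
  restartPolicy: Always
  containers:
  - name: app
    image: busybox            # illustrative image
    # Wrapper script: do the setup work, then exec the real process so
    # it becomes PID 1 and receives signals directly. Because this runs
    # on every container start, the setup is redone after each restart,
    # unlike an init container, which runs only once per pod startup.
    command:
    - sh
    - -c
    - |
      wget -O /tmp/config.json http://example.com/config.json   # illustrative fetch
      exec sleep 3600                                           # stand-in for the real main process
```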
D: You can also do it that way: you can use a pre-execute script or a post-execute. But those are, well, I'd have to check whether that's actually true, but probably you could do that, or just wrap the entrypoint in something that does it periodically, or does it prior to launching the process. It wasn't really a use case for init containers.
D: Okay, so the question is: do we want init containers to be rerun that way? And again, I just kind of feel like with sidecars I understand how it works, and it wouldn't affect the general semantics of what a pod is today. Running an init container stays as it is today; it doesn't change anything. With this, we're kind of changing the API and interface to be something fundamentally different. It's sort of risky, but again, I'm not diametrically opposed to it.
D: I just don't see what the necessity is, and it doesn't seem like it solves... So, sidecars are something that's frequently requested and solve a bunch of actual, real-world use-case problems, where people say: okay, I can't actually do this because it falls over; you're encouraging me to use mutating webhooks to inject containers into my pods for sidecars. That's a scenario you say you want to support, and you want to be able to support these applications where the lifecycle... I mean, they were legacy.
B: How about we do this, because this is the question we're getting hung up on here. So the proposal basically says: if a container fails, you can have a restart policy to restart the whole pod rather than just the container. That's what the proposal here is, right? And the scenario, the documented example, is the one we're calling into question as a motivation, right?
B: I mean... no, sorry, I thought her name was... it wasn't Janet. Let's go back and ask.
B: If anybody here has a more... I don't have a motivation for this at the moment, because I haven't run into the problem; I'm just using entrypoints, or volumes, or other things that have kept this from being a problem for me. But if there are motivations, please post them here, because I know that would be helpful for folks.