From YouTube: Kubernetes SIG Apps 20191202
A: Okay, welcome everyone to SIG Apps. My name is Adnan Abdulhussein, and today is the 2nd of December, so this is our first meeting back from KubeCon. We have Matt Farina and Janet Kuo as co-hosts, and we don't have any announcements today, so we'll jump right into the discussion. The first thing here is the data protection working group, so Xing and Dave, do you want to take this? Yeah.
B: So now that volume snapshot support is going beta in 1.17, it's a good time to look at what other functionality is missing in Kubernetes so that we can provide data protection support. So we thought it's a good time to form this data protection working group to promote this feature in Kubernetes.
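For reference, the volume snapshot support mentioned here is exposed through the `VolumeSnapshot` API, which moved to `v1beta1` in 1.17, and a snapshot can be restored by pointing a new PVC's `dataSource` at it. A minimal sketch, assuming a working CSI driver; the class and PVC names are illustrative:

```shell
# Take a snapshot of an existing PVC.
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative snapshot class
  source:
    persistentVolumeClaimName: data-pvc    # existing PVC to snapshot
EOF

# Restore by creating a new PVC whose dataSource is the snapshot.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored
spec:
  storageClassName: csi-sc                 # illustrative storage class
  dataSource:
    name: data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
EOF
```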
B: So we listed some of the potential topics that we can discuss in this working group: extracting snapshot data without creating a new volume; a data populator, which refers to the ability to create a PVC from an external data source such as an S3 object or a git repo; retrieving the diff between two snapshots, which is very important for efficient backups; consistency groups, the ability to take crash-consistent snapshots across multiple volumes, which is something we have been discussing in SIG Storage; and application-consistent snapshots.
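As a rough sketch of the data-populator idea, a PVC's `dataSource` would reference a custom resource describing the external source, and a populator controller would fill the volume before binding. Everything below is hypothetical: the CRD, its API group, and the field names are illustrations of the proposal, not an existing API (at the time, `dataSource` only accepted snapshots and PVCs):

```shell
cat <<EOF | kubectl apply -f -
# Hypothetical populator CR describing an external data source (an S3 object).
apiVersion: populators.example.com/v1alpha1
kind: S3Source
metadata:
  name: seed-data
spec:
  bucket: my-bucket
  key: backups/db.tar.gz
---
# PVC asking a (hypothetical) populator controller to fill the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSource:
    apiGroup: populators.example.com   # pointing dataSource at an arbitrary CR
    kind: S3Source                     # is exactly what is being proposed
    name: seed-data
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
EOF
```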
B: There is already a KEP for that, which is being reviewed. And data protection policy, for example the ability to set a schedule for periodic backups. So this working group would be a place to promote cross-SIG collaboration, and the stakeholder SIGs will be SIG Apps and SIG Storage; I think there will be features that are developed under both SIGs. We'd like to hold regular Zoom meetings to discuss these issues. Dave, do you have anything to add?
C: Sorry, I had to find my mute button; it's hiding somewhere. Yeah, so I think this is a really good time for us to start bringing this together. I think the goal here is to bring all the different stakeholders together. Currently we've got a number of data protection vendors and backup vendors talking, but we also need to bring in application vendors, more of the storage vendors, and end users as well, to see what needs there are.
E: This is Niraj Tolia from Kasten. I've been in previous conversations with Jiaying, Dave, etc., and given the fact that we do work on these capabilities, I think this is actually a good idea, versus vendors doing one-offs, doing something very customer-specific to just their products. So I would be in favor of this.
C: To go ahead and start with the incremental backups: I really want to take a holistic view of this, because the data protection space is kind of large, and I think we need to go top-down as far as what problems we are actually solving for people, and make sure that it all fits together, because it's kind of been done piece by piece and I'm not sure all the pieces actually fit at this point.
F: Yeah, I'm all for it, right; don't get me wrong. For me it's like, since we were getting started — in the discussions Dave and I had during KubeCon as well — we collected a whole bunch of positive feedback, and people want this. But what I am afraid of is that, as I mentioned, this is a really big topic, so having the working group focus on this is probably the best beginning.
A: And this will be part of the requirements of forming the working group. One thing that's interesting about working groups is that generally they need to have an exit, effectively, so some goals that will need to be reached. And so, as far as coming up with the charter, we need to define what those goals would be for the working group to be disbanded in the future.
G: We want to work in this space, and as a first effort we want to outline the exact features we think we want to build, or the direction we want to take, or what we think is most important, and have that be step one; and then take each one and try to come up with a timeline and an order for how to address them. That doesn't seem like an unreasonable approach to me.
C: Yeah, I think at this point one of the challenges is just getting the people involved that should be involved with this, because I think there's a lot of people outside the Kubernetes community who are starting to become part of the community but aren't really there yet. So that's kind of the first challenge: having a structure where we can actually organize people and say, hey.
B: So I just want to mention that, for the topics on the list here, a lot of those have already been discussed, right. So if you look at consistency groups, there is already a design in progress; for application-consistent snapshots there was already a KEP; and data populators are also something that has already been brought up, and we just want to go back and talk about that again. So it's not like everything here is a brand-new topic, but of course there are a lot of things we want to do.
G: And maybe, instead of focusing on just what's new, focus on what's addressed, what's not addressed, and what's most important to the end user, and then the prioritization of what we want to do would kind of flow out of that. It's not a story about tying up things that have already been done and just identifying the gaps, as much as telling the whole story.
C: That sounds great. So I guess we will go ahead and talk about this some more at the SIG Storage meeting, and I'll put together a mailing list and start inviting people to join it, and we'll work on a charter and we'll schedule an initial meeting of the working group. I think we can just go ahead and start scheduling working group meetings even before we have the official charter, right? Yeah.
G: The thing about having a sponsor for a working group: if you're going to go through the process to start one, you just want to write an outline of why you think this needs to happen as a working group, and I think one of the things that's in the slide is that, basically, you want to do some work across SIGs that may affect built-in controllers and in-tree APIs.
F: We had a couple of discussions, again from Google; we work closely with SIG Storage. I think one of the reasons why I put my initial view out is that there have been a lot of projects released in this area, some that will go internal and some that will go commercial, and the goal here is to get people aware of this thing going, and also there are pieces we do need from SIG Apps to make the whole data protection thing happen, right.
F: I totally get the previous feedback about figuring out the gaps; figuring out these gaps and then bringing people together is probably the best approach I can think of, and I will continue to work on this, and Dave will as well. In terms of SIG Storage, we had a couple of discussions there on this topic as well; as I mentioned before, this has been discussed heavily during KubeCon as well as offline. So, go ahead.
G: I was involved in some of the early discussions around this as well. I just wanted to say we're not in a situation of saying, yeah, let's go do this, and then a SIG asks, what are you guys talking about, right? So you're going to plan to go present that at SIG Storage, and that'll be good, because a lot of the people who are actually storage vendors and implementers of the underlying storage systems that are popular to sit underneath Kubernetes are primarily SIG Storage members, right? So having their participation will help get something done.
D: Do we at this time have any form of cross matrix of what is supported in the various releases of Kubernetes through CSI, and by the various vendors who have CSI plugins and potentially backend storage systems? My intent in asking that question is to identify where the gaps are and where the overlaps are.
B: Yeah, so right now in Kubernetes we have the volume snapshot support, and there is also cloning support; I think right now most drivers have support for that. That actually is already available in the CSI docs — we have that document, so we could go through it and find all of those drivers that support snapshots. Other than that, we don't have any other features in CSI or in SIG Storage that are specific to this.
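The cloning support mentioned here works the same way as a snapshot restore: the new PVC's `dataSource` points at an existing PVC in the same namespace, and the CSI driver clones it. A minimal sketch (names illustrative):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-clone
spec:
  storageClassName: csi-sc        # must match the source PVC's storage class
  dataSource:
    kind: PersistentVolumeClaim   # the clone source is a plain PVC
    name: data-pvc
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi               # at least as large as the source volume
EOF
```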
B: So I can forward that to you, and then you can take a look and see if there's anything else you're looking for that you want to compile. But I think one thing that we would need to find out is, let's say we want to add changed block tracking, right — that's probably not supported by every vendor, so that type of thing would be good to find out: which drivers support it and which do not.
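To see what a given cluster currently reports, the registered drivers and snapshot classes can be listed directly; note that finer-grained feature support like changed block tracking is not expressed anywhere in the API today, which is exactly the gap being described (the class name below is illustrative):

```shell
# CSI drivers registered in the cluster.
kubectl get csidrivers

# Snapshot classes show which drivers have snapshotting wired up.
kubectl get volumesnapshotclasses

# Per-class details (driver/provisioner name, parameters).
kubectl describe volumesnapshotclass csi-snapclass
```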
G: Right, I mean there are kind of two strategies, right. Let's assume that you're going to add extended functionality into the CSI specification. You can either support basically the common subset across all vendors, which makes it pretty easy to implement but also gives you the lowest common denominator in terms of features, or...
G: ...make all functionality optional. What you probably don't want to do is all of a sudden make somebody non-compliant with CSI just because their particular system doesn't implement something, right. So you can have various profiles of compliance, or optional features, inside of the specification, if you choose to extend it.
C: That's one of our difficulties right now. I'm on the VMware side, right, and the current snapshot API is very much based on the cloud model — the AWS / Google Cloud model — and it really doesn't work for the on-prem model, so we've got to be able to support both of these. And the features we're asking for, like incremental backups, for example: that's one thing the cloud providers probably aren't going to do for a while; it's not going to be a big thing for them, if they even choose to support it.
C: So in the case of a cloud snapshot, which is an EBS snapshot, that's actually something you can rely on as a backup, because it moves the data off the primary storage and into S3 under the covers. On-prem, for example, when we take a snapshot, that snapshot resides on the primary storage; if the primary storage is lost, or the volume is actually deleted, the snapshots all go away — so they're obviously not actually backups for us, yeah.
C: That's one of the outcomes of this: I mean, we really have to have a description of what data protection means in Kubernetes and what it means to you. For example, as a data protection vendor — I mean, we have a hundred and fifty backup partners at VMware. I'm going to start bringing these people in and explaining to them: here, do this, and you can work with Kubernetes, right. And you know, I can't do this one-on-one.
G: We might also want to consider that it would probably be ideal, as an outcome for end users, if the model that's presented to the end user is uniform across the cloud and across on-prem installations. I mean, the win for a lot of people, aside from developer velocity, with going to Kubernetes and containers is workload portability, so you can't have a completely different model everywhere; it probably wouldn't be great for them.
E: I think we need to be cognizant that, while all environments might have the same APIs, the actual performance of said APIs might be very different, because for a lot of these things, for efficiency and performance, you really depend on the underlying storage system; there's not much you'll be able to do at some of the API levels.
C: ...haven't solved that problem. It's more than just the network infrastructure, though; I mean, it's things like how you actually do, for example, cloning and such. So I think if we start to lay these out as: here are the use cases for these things, here's what the performance is expected to be, here's how often we expect these calls to be made, and in what circumstances...
C
That's
going
to
help
storage
vendors,
for
example,
decide
how
to
implement
things
like,
for
example,
in
our
cloning
mechanism,
will
actually
do
a
complete
clone
every
time
or
copy
all
the
bits.
We
did
that,
because,
when
we
designed
this
thing
like
four
years
ago,
the
idea
of
doing
a
link
clone
was
not
really.
We
can
do
this,
but
it's
weird
in
our
system,
and
so
we
didn't,
but
when
we
start
using
this
as
the
way
to
extract
data
from
a
snapshot,
it's
a
requirement
that
we
needed
that.
K: We got a few requests, also internally here at Google, about being able to somehow override that. So you should be able to say, for all workloads, if there's no explicit PDB, assume that at least one pod needs to always be available, or that maxUnavailable can be 25%, something like that. There was a discussion, or there's an issue on this, from a couple of years back, with a lot of back and forth on it, so I was interested in whether this is something we think might be worth looking at.
G: ...is different from what's captured with a pod disruption budget, and it's okay for those two things to be separate, and you can specify a pod disruption budget in order to do it. I think what you're proposing is different, in that you're saying: okay, we do have this policy mechanism for disruption tolerance, but it only works on a per-workload basis, and that's cumbersome for most users; we might be able to do better and have a higher-level policy mechanism that specifies general disruption tolerance for categories of workloads, potentially across namespaces.
G: I remember a kind of similar conversation on that with ReplicaSets and StatefulSets of size one, but I thought the outcome there is that if you have something of size one, then, given the nature of infrastructure, you can't expect high availability, right; you have effectively built a system that does not tolerate disruptions. So I don't know how we can help you.
G: That's the downside, right: you would block any drains and cordons. Anybody who wanted to build infrastructure as a service on top that provided an auto-upgrade feature basically can't do anything, and then, honestly, you can't upgrade your node yourself without disrupting that workload, right. So if you run something that genuinely doesn't tolerate even a planned disruption, there's not a lot we can do that's going to help you.
G: Then the semantics aren't tied to something that the users expect today, right; the expectation has to be set up front for what the behavior should be. So I mean, I'm not entirely sure what this looks like, but I think Morgan's put a lot more thought into it than the rest of us probably have at this point, so I'm kind of curious to see what he's got to say.
K: Yeah, so the disruption case when there's just one replica is tricky. If you can specify a default, that would be something you can opt into: so if you don't do anything, it'll still be like it is today. We can put some thought into that and put it into a KEP, and we can continue this further, yeah.
K: What I still see here is that there are situations where we can get into weird states. Currently, if you have pods that are crash-looping and a PDB with maxUnavailable of one, then none of the other pods can be terminated, even if there are ten pods and we only need one. So potentially what we should look at is the rules for when it is safe to terminate a pod even if there is a PDB, especially around crash-looping pods that can block draining.
K: I think part of the difficulty here is that the rules in the disruption controller and the eviction API are slightly different. For a pod to be considered healthy by the disruption controller, it needs to be running and ready; but the rule for when it's always safe to evict, as it sort of stands right now, is that the pod has to be succeeded or failed. So there's a gap there.
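The eviction path being contrasted with direct deletion here is the pod's `eviction` subresource, which is what `kubectl drain` goes through and which is the only path PDBs gate. A sketch of the three paths (node and pod names illustrative; flags as they existed at the time):

```shell
# Drain goes through the eviction API, so PDBs are respected.
kubectl drain my-node --ignore-daemonsets --delete-local-data

# The same thing for a single pod, by POSTing to its eviction subresource:
kubectl proxy &
curl -s -X POST \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/eviction \
  -H 'Content-Type: application/json' \
  -d '{"apiVersion":"policy/v1beta1","kind":"Eviction","metadata":{"name":"my-pod","namespace":"default"}}'

# Deleting the pod directly bypasses the PDB check entirely.
kubectl delete pod my-pod
```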
L
Yeah
this
is
Michael
Gugino
I'd
like
to
chime
in
on
this,
so
I've
been
working
on
the
patch
set
in
question,
and
so
my
point
of
view.
I
work
on
cluster
API,
part
of
said,
cluster
lifecycle,
and
so
that's
we're
trying
to
add
or
remove
nodes
completely
automate
it,
and
so
this
is
a
the
pending
issue,
something
that
we
hit
in.
L: So we did some back and forth, and I have a couple of patch sets out. In the case of a pending pod, we can always pretty much delete it, because it was never ready, it was never healthy, and all you have to do is make sure nothing has mutated the pod, which means enforcing the resource version on delete.
L
So
that's
the
other
patch
say
in
context
of
ignoring
pod
disruption,
budgets
and
what
it's
appropriate.
My
preference
would
be
to
never
ignore
positive
rupture
budgets.
We
have
other
mechanisms
in
place
to
do
that,
such
as
you
can
just
delete
the
fog
I've
got
a
couple
patch
sets
out
to
modify
the
drain
library
to
skip
eviction
and
enforce,
delete,
and
things
like
that.
M: I'm not sure if checking the pod status is the correct approach here. Maybe something like checking the availability, but that seems a bit racy when I start thinking about it; I would have to think about it more, and maybe it wouldn't be helpful. I saw this PR just now — so if you're opening a PR, just cc the sig-apps-pr-reviews team so people get notifications for it.
L: I think that pretty much covers it. You know, we're just trying to enable automated remediation so that nothing is blocking drain, because we still want to respect PDBs — that's the goal. But there are going to be pods in weird states, and there's a reason we're trying to get rid of that particular node, and that's why, if the respective PDBs are good, then we should just delete that pod. We're just trying to sort out what the preferred user experience is there, but I think we covered it.
O: Hello, this is Chris Vignola and I would like to ask a question. I work with a team that has built an open-source project called Application Navigator that makes use of the Application CRD to do various visualization and other related things, and I was wondering if this body would be interested in seeing a demo of that at a future meeting, and if so, how to go about getting that on the agenda.

A: Very much so, yeah.