From YouTube: Kubernetes SIG Apps 20190318
A
So there don't seem to be any announcements for today. We'll start with our open discussion topics, just to let anybody who is here solely for that purpose participate, and we'll move on from there.
So there are two KEPs that are currently open: StatefulSet volume resize, and maxUnavailable for StatefulSets. Neither of these is actually merged, and I think they probably should be, even though some of them — particularly the volume resize one — have some outstanding items.
A
We've agreed to discuss it at a minimum, so it seems like we should merge it at this point and continue resolving things until we get it into an implementable state; there are even some criteria for that.
There are some things that we'd like to see added to it, but if nobody has any objections, I'm going to try to move forward with pushing that to be merged as accepted — yeah, I think accepted is the state where we're continuing to discuss. And then there's Mike's proposal for maxUnavailable in StatefulSets.
A
Again, it still has some open issues, and I'm not quite sure we're at a place where it's ready to be implemented yet, but I think it's ready to be accepted at this point — like, we should merge it and then work out the rest of the issues from there, in my opinion, just because it seems like there's enough momentum that people are interested in it. We're definitely interested in discussing it, as evidenced by the number of comments on it, and it does seem like a valuable feature.
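For readers following along, the maxUnavailable proposal being discussed concerns relaxing the strictly ordered, one-pod-at-a-time StatefulSet rolling update. This is a hedged sketch of the shape the proposed field took — at the time of this meeting it was not implemented, so the exact placement is illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 5
  serviceName: web
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # As proposed: allow up to 2 pods to be updated in parallel,
      # instead of the default strictly ordered one-at-a-time rollout.
      maxUnavailable: 2
```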
A
Faster updates for StatefulSets are something that people have generally wanted — something we've been hesitant to do because we don't want to break the semantics, but it was also something, prior to v1, that we said we would consider doing later. So I think we'll move forward with that, if people don't have any objections. There are some test failures, but they mostly look to be assigned at this point, so there's nothing super critical there.
A
Anything else? There's been some discussion about volume provisioning for DaemonSets, now that we have local PVs and this new way for DaemonSets to use local storage other than hostPath. That hasn't evolved into a KEP as of yet — I think SIG Storage is kind of thinking about it. If it evolves into a KEP, we'll present it here. And there's other talk in SIG Storage about volume snapshots and their interaction with applications.
A
So, in talking to some of their members, I was considering discussing this SIG supporting kind of a joint effort to make sure that application quiescing and un-quiescing is well supported as part of storage snapshotting. This would be out-of-tree work: snapshots were proposed to be done in tree, but SIG Architecture thought that, generally speaking, it could be done out of tree and so should be — so that's the path they're taking. They're asking for input on how they should interact with controllers, and how it interacts with pods and workloads.
A
So there may be some effort to be done there jointly with SIG Storage. In other news for joint work: we have the sidecar KEP still open, and I haven't been able to find work progressing on it — I want to go and chase that back down. I think sidecars are valuable, particularly for service-mesh-type technologies where you're doing automated sidecar injection via mutating webhooks: controlling the lifecycle, and being able to support applications that are just developed assuming that a sidecar will be injected.
A
Maybe the PR is worth putting in a branch — hopefully there'll be an issue soon, but failing that, I think we'll ask the group, and someone else is going to have to take it over. The other thing we were also kind of looking at is that we need to do something with PDB in the same vein — that should move to GA soon as well. I think that might be even easier, because I don't think we're going to change the API when we promote it to GA, but we should have a plan for that, at least.
D
Excellent — cool. So let's go ahead and jump into what the crossplane project is. This is a pretty quick set of slides, and then we'll poke at some things on a running system. We can make this interactive if you have a question, but it goes pretty quickly, so it might be best to hold all questions to the end — but we'll see. All right, so let's first talk about what crossplane is.
D
We already got a good introduction there — they kind of captured the essence of it. Really, the core of what crossplane is accomplishing here is that it's a multi-cloud control plane: an API, a set of controllers, etc., that spans across cloud providers, regions, on-premises — a lot of different, disparate environments.
D
Those resources are optimized for a number of different features that we'll get into, and the model here is also a separation of concerns between the developers who want to deploy their workload or application, and the administrators who operate the various environments being used. It's an open source project — community-driven, extensible, Apache 2.0 licensed — and the community is growing, so we're very, very happy to have people come on in.
D
I don't think I need to tell the SIG Apps folks too much about why multi-cloud would be an interesting thing, but for me the real key part is that it grants you the power of choice. When you know your application can run in a portable way in many different environments, you get to make good decisions about what features, facets, or attributes you want to optimize on. Do you want the cheapest? A particular geographical region? Governance and compliance? Etc.
D
As we talked about, crossplane — that multi-cloud control plane — is based on the Kubernetes machinery: the Kubernetes API server and etcd for persistence, a set of controllers, custom resources, etc. What this declarative API based on the Kubernetes machinery enables is integration with the rich set of tools in the Kubernetes ecosystem — kubectl, for instance: with the custom resources, it slides right in there.
D
…for a first-class experience, along with other tools — Compose, or whatever you want to use to interact with this declarative API; tools that you're used to in the Kubernetes ecosystem would work. We think that Kubernetes did an amazing job at learning a lot of lessons about what it takes to orchestrate containers, and we think those lessons apply to portable workloads and other resources as well.
D
So, as we talked about before, there's a set of custom resources in the control plane that will be available for you to use. These model cloud provider resources — an Amazon RDS database, or Google Cloud SQL, or a Google GKE cluster — all these managed services and infrastructure that you find in the cloud providers. We have CRDs to match those, and then a set of custom controllers that sit in reconciliation loops.
D
So, in the middle there is what we could say is crossplane: that control plane is comprised of the API machinery — the API server and etcd for persistence — and then we've got that set of controllers, and a scheduler as well, all running in reconciliation loops. On the left side of this diagram, we have all the various tools in the ecosystem that we were talking about.
D
We have the set of cloud providers and all of their managed services that you'd be able to interact with — provision, manage, configure, control their lifecycle, basically. In addition to the big public cloud providers, there's also a set of interesting independent cloud offerings — Cockroach Labs or Redis Labs, etc. Those are all interesting as well, and could be provisioned and have their lifecycles managed too, using this control plane that spans across cloud providers, independent cloud offerings, on-premises, etc.
D
So here's where I think it's really interesting: this whole concept of portability that we've been talking about — workload portability is a big goal of this project. The way that crossplane does this is in a very familiar way that you're probably already used to, from what's done with volumes. With volumes, you can have a PersistentVolumeClaim and a StorageClass, and that enables a particular storage request to be fulfilled in a general way.
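The volume analogy being drawn here can be made concrete. A minimal PersistentVolumeClaim bound through a StorageClass — names here are illustrative — keeps the cloud-specific details out of the claim:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # illustrative name
provisioner: kubernetes.io/gce-pd     # cloud-specific detail lives here, not in the claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  storageClassName: standard          # the claim only names a class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```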
D
As a user who wants to deploy an application, you just use the storage — you don't need to know if it's a Google persistent disk, or Amazon EBS, or a Ceph volume from Rook. You're just expressing that your pod needs some storage to consume, and this volume abstraction handles that for you and provides that storage tier to your pod. So, in a very similar way…
D
…what we have here for workloads is that all sorts of other types of resources can now be brought into this environment and managed in a similar way, such that your application doesn't need to know what environment it's going to run in. It could be in Amazon's cloud, or Microsoft Azure, or on-premises. You write your application once, and it should run anywhere, in a portable way. So, a little bit more detail about that separation of concerns I was getting into.
D
Similar to a PersistentVolumeClaim and a StorageClass, we have this concept of resource classes and resource claims, and we'll see that in the demo here in just a second. Basically, what that enables is that as a developer, you just have to define your workload in a general way, without knowing any specifics, and then an administrator for your environment — who knows all the different cloud environments that are available to your organization — can start plugging in some of the specifics: what cloud it should run in, what size machine instance to use, whatever it may be.
D
The key here is that as a developer, or author of the application, you don't need to know those specifics. And the last part here is the scheduler. When you have a control plane that spans across all these different environments, you can then start making some intelligent decisions about where these workloads should run and where the resources should be scheduled or placed as well.
D
So you can start optimizing for cost, or availability, or maybe one of the cloud providers has some interesting innovation or feature set that you want to take advantage of, or the best performance — all these different things you can use to inform the placement of these resources. So, similar to the Kubernetes scheduler that decides which node a pod should run on, this workload scheduler in the multi-cloud control plane…
D
…can make a decision about what cloud or what environment — like on-premises — a workload should be scheduled to. You can use some of those familiar concepts — affinity, taints and tolerations, selectors, and all that sort of stuff — to start influencing where in your multi-cloud environment an application should run. And since the application is written in a portable way, the specific place where it ends up running could be changed dynamically at runtime.
D
So what I want to show you here is the experience of what I'm talking about, bringing all this stuff together. On the left and right sides of my screen here — hopefully this is big enough; let me make it a little bit bigger — we have workloads. This particular workload is going to run a WordPress blog content management system, which needs a couple of resources underneath it in order to run. One of those is a MySQL database.
D
It needs some persistence in order to store posts and comments and users, etc., in that database. So in this workload, what we're saying is that we need a MySQL instance. This is a claim, similar to a PersistentVolumeClaim — a MySQL claim — and we say what resource class will be used to fulfill this MySQL claim. Then, inside of the workload, we have that set of resources that it needs.
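A hedged sketch of what such a MySQL claim looked like in early crossplane releases — the API group, version, and field names below are from memory and may not match any specific release exactly; they are here only to show the claim-to-class shape:

```yaml
apiVersion: database.crossplane.io/v1alpha1   # assumption: early crossplane API group
kind: MySQLInstance
metadata:
  name: demo-db
spec:
  classReference:                # which resource class fulfills this claim,
    name: standard-mysql         # analogous to storageClassName on a PVC
    namespace: crossplane-system
  engineVersion: "5.7"
```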
D
The MySQL database — and we also have, because it influences what cluster it will end up running in, a payload here, which is a deployment of a particular container. An interesting part here: this WordPress app needs some information about some of the resources that got spun up for it — that MySQL database. We need some way to inject into this WordPress application the connection details — where the MySQL database is running, what the credentials for it are, etc.
D
This is probably a concept that people are familiar with as well. When the MySQL database is dynamically brought up, in response to this WordPress workload requesting in a general way that it needs MySQL, the controller that spins up that MySQL instance inside your cloud of choice also saves its credentials and endpoint information into secrets, and you can then just reference that credential information in your pod.
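Consuming those controller-written credentials from the WordPress pod uses the standard Kubernetes secret mechanism. A sketch — the secret name and key names here are hypothetical, since crossplane derives them from the claim:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:4.9
          env:
            # endpoint and credentials written by the provisioning controller
            - name: WORDPRESS_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: demo-db-connection   # hypothetical secret name
                  key: endpoint
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo-db-connection
                  key: password
```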
D
We also have a Kubernetes cluster claim, which is a general abstraction of a Kubernetes cluster, and we say what particular resource class will be used to fulfill this Kubernetes cluster claim request. Then let's take a look at what these resource classes look like. This is our separation of concerns: the developer has expressed a general request for a Kubernetes cluster and a MySQL database, and the administrator, behind the scenes — just like with storage classes — fills in the details.
D
What does it mean that you want a Kubernetes cluster? Okay, well, in my environment here, I'm going to use the GKE provisioner, which will bring up a GKE cluster for you, and it's going to use this machine type and it's going to be in this zone — and similarly for MySQL.
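A hedged sketch of the administrator-side resource class being described — the provisioner name and parameter keys are illustrative, following the StorageClass pattern the speaker is invoking, and should not be read as the exact crossplane schema:

```yaml
apiVersion: core.crossplane.io/v1alpha1          # assumption: early crossplane API group
kind: ResourceClass
metadata:
  name: standard-gke-cluster
  namespace: crossplane-system
provisioner: gkecluster.compute.gcp.crossplane.io  # illustrative provisioner name
parameters:
  machineType: n1-standard-1   # the admin picks the machine size...
  zone: us-central1-a          # ...and the zone; the developer never sees these
reclaimPolicy: Delete
```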
D
We do this for other types of resources as well — Redis and object storage, and a whole number of other ones in the works. But now that we've taken a look at what this experience looks like — defining a workload and its resources in a general way, and then the administrator being able to specify, through resource classes, some of the specifics of the environment and how these requests should be fulfilled — let's take a quick look at it running.
D
So I started this up right before the meeting, because bringing up a GKE cluster and MySQL databases inside of GCP takes minutes at a time — so I started it before the meeting began. But let's take a look here.
D
With kubectl, I'm going to say: give me all the resource classes. What we have here are those two resource classes — which look a lot like storage classes — for a cluster and for MySQL: what provisioners should be used to fulfill those requests, what the reclaim policy is, all those sorts of things that are familiar from the storage classes we already know. Now let's look at what a Kubernetes cluster claim and resource look like.
D
What I've done here is say: give me all the Kubernetes clusters — which is the same kind of thing as a claim, like a PVC. We see that we have one claim for a Kubernetes cluster that has been bound, using this resource class, to a particular cluster by this name. Then I say: okay, give me the specific GKE clusters, and that — just like a persistent volume — has been bound as well, to the general Kubernetes cluster instance.
D
So we have a general, abstract Kubernetes cluster and a concrete GKE cluster that are bound together through dynamic provisioning. One of the last things to look at here: similar to what we just saw for Kubernetes clusters, the same thing happens for MySQL, where we have a general MySQL claim that has been bound to a specific Cloud SQL database instance inside of GCP. Those two are bound together, just like PVCs and PVs. Then, finally, let's look at the workload.
D
Let me make some room here. The workload itself — we can see that it's in the running state; we see that the scheduler has picked this particular GKE cluster for it to run on; and we can also see that it's got an IP address that we can access this workload on. If we click that, we're taken to that WordPress application that was dynamically provisioned, along with all of its resources, inside of GCP — because of the particular resource classes the administrator created that specify all the environment details. All right.
D
So that is the demo. And then just a couple of links here about where you can find us — on GitHub and Slack and Twitter and all that jazz — since it is a community-driven project, and we are open to discussions and new folks coming in at any time. So that's all I wanted to share here, and I'd be happy to have some other folks on the crossplane team chime in as well — Nic Cope and others are here too.
A
Thank you for showing this off; it was really nice to see. I had a question. For instance, for the database resource, or even resources like Kubernetes clusters, there's a difference in feature sets across clouds. Typically, when people take this approach, there's some trade-off between allowing deep, specific customization or features offered by an individual cloud provider, and kind of choosing the lowest common denominator that's implemented across the board. How did you come to a balance between those two extremes?
D
So I think the first way to look at that is that the resources we're starting to abstract here are ones that are open source projects with kind of standard protocols — MySQL, Postgres, Kubernetes, object storage in the form of S3 — and all the cloud providers have Redis representations as well, for caching. But yeah, you do get into more interesting situations where a particular cloud provider might have a particular feature that you really, really need.
D
In that case, it's a common situation: when you try to do a portable abstraction across a lot of different specific implementations, to make it completely portable across them, you end up with the set of common functionality. And if you have a very specific feature set that you really need to take advantage of, one that isn't available in the other cloud providers, then this completely portable definition isn't going to…
D
…take care of surfacing that specific feature you may have a dependency on. I was listening to something from Tim Hockin describing the same sort of situation — I think it was for either ingress controllers or load balancers — and how they ended up with the lowest common denominator, and then specific features were put into annotations, which wasn't a very scalable model either. So I think there's some work to do there.
E
Just one other thought here: we do it with a separation of concerns. We separate between what the developer or application owner sees, and what the infrastructure owners see. Typically, you can be as specific as you want on resource classes, from the infrastructure owner's perspective.
E
The common surface area that we try to capture is from the developer or application owner perspective, and the two come together. So what you'll find is that if you're using something like Redis or MySQL or others, the biggest variance between the cloud providers is around hosting and the features available there — hosting and scaling, compliance and security. From a developer's perspective, the surface area seems to be more common, especially in how an application consumes a resource, so we're taking advantage of that property.
A
Oh — let me check and see if there are any questions in chat. Okay, so we've got another 20 minutes left. Does anyone have any other discussion topics they'd like to bring up?