From YouTube: Intro: Apps SIG - Adnan Abdulhussein, Bitnami
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Intro: Apps SIG - Adnan Abdulhussein, Bitnami
Join Kubernetes SIG Apps to learn about the areas of our focus, what we are working on currently, and how you can get involved. Veteran SIG Apps members will be on hand to help answer questions.
To Learn More: https://sched.co/Grd3
Okay, I think we'll go ahead and get started, so welcome everyone to this session on SIG Apps, the SIG Apps community. My name is Adnan Abdulhussein, and I make up a third of the SIG Apps chairs, along with Matt Farina from Samsung, who is at KubeCon, but I don't think he could make it to this session this morning. He is around, though, and he will be at the SIG Apps deep dive later on as well.
So, just a show of hands: how many people have been to a SIG Apps intro or deep dive at previous KubeCons? Cool, just a couple, actually. How many people have been to SIG Apps meetings before? OK, cool. Yeah, so it sounds like there are a few people who know about SIG Apps. How many people, is it their first time checking out SIG Apps? Cool, that's quite a few people, that's great. So hopefully I won't bore the lot of you who kind of already know what SIG Apps is all about.
But I'll give a brief introduction to SIG Apps, and I'll give an update on what we've covered since the last KubeCon and some of the achievements we've made, and then I'll also point out some other things that are going on, like the deep dive that's going to happen later this week, and give you some pointers on how you can get more involved.
So SIG Apps is basically a place to go and discuss how people are deploying and defining applications in Kubernetes, and our core demographic is the application developer or the application operator. So we kind of center around their experience of using Kubernetes. This means two things: we own code both in core Kubernetes and in external projects.
So in our meetings, we'll have, again, the application developers and operators, but we'll also have core contributors to Kubernetes who are able to give answers on, you know, what's going on with DaemonSet or what's going on with StatefulSet, and you'll be able to go and get those questions answered there.
We'll usually do a quick verbal survey, you know, to find out how many people from the group are going to KubeCon, or things like that, and then we'll typically have some demos, so typically one or two 10-minute demos on new projects in the ecosystem or updates to existing projects, and then there'll be shared discussion topics. Right now we're kind of alternating between discussions around the workloads API one week and, the other week, discussions around developer tooling, again in the ecosystem.
Then there's the ReplicationController, which is a legacy API; really you should be moving to Deployment, and so you shouldn't really be using that anymore, but it's still in the core v1 API as legacy. And then, if you're running stateful workloads, so things like databases, you probably want to use StatefulSet, and what you get with StatefulSet is...
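As a sketch of what a StatefulSet looks like for a database-style workload (the name `web`, image, and storage size here are illustrative, not from the talk): each replica gets a stable identity and its own volume claim.

```yaml
# A minimal StatefulSet: each replica gets a stable identity
# (web-0, web-1, ...) and its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web            # headless Service governing the stable network IDs
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # one PVC per pod, retained across rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```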
That's CronJob, which runs at a specified interval, so similar to cron. And then finally there is DaemonSet, and DaemonSet is used to schedule pods on every single node in the cluster. This is useful for things like network add-ons, so, you know, things where we would use this to make sure that there's a pod on every single node. You'll notice that all of these workloads have, if you can see my mouse here, a group and a version. So you've probably seen this.
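A sketch of the DaemonSet and group/version points just made (the name and image below are illustrative):

```yaml
# A minimal DaemonSet: the controller places one pod on every node,
# which suits node-level agents such as network add-ons.
apiVersion: apps/v1           # the "group/version" visible on each workload kind
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/node-agent:0.1   # illustrative image
```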
Some of the things that we've worked on: the TTL controller for job garbage collection. This allows you to specify a time-to-live for certain resources; right now I think it only works with Jobs, and it's alpha in 1.12, but it effectively allows you to garbage-collect resources after some amount of time. Then the DaemonSet is shifting to use the default scheduler. So historically, the DaemonSet controller basically takes care of scheduling its pods itself, which means it can't take advantage of some of the things that the default scheduler has, and it has its own kind of separate code base for that; so switching to the default scheduler is going to make reuse a lot easier there, and that's beta in 1.12.
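The TTL-after-finished field just described looks like this on a Job (Job name and command are illustrative; while the feature is alpha it sits behind the `TTLAfterFinished` feature gate):

```yaml
# Once this Job finishes, the TTL controller deletes it (and its pods)
# 100 seconds later.
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
```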
Next, optional service environment variables: this is a flag that you can specify in the pod spec to disable the service environment variables, the Docker-link-style environment variables that show up in pods, and that's going to be available, or is available now, I guess, in 1.13.
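That flag on the pod spec looks like this (pod name and image are illustrative):

```yaml
# Setting enableServiceLinks to false stops the kubelet from injecting
# the Docker-link-style *_SERVICE_HOST / *_SERVICE_PORT variables.
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links
spec:
  enableServiceLinks: false
  containers:
  - name: app
    image: busybox
    command: ["env"]
```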
And then there's also a provisional proposal going on right now on sidecar containers, and this is about managing the lifecycle and dependencies of containers within a pod. So you can mark certain containers as sidecars, and then the controller will make sure that the containers marked as sidecars start up before the main containers.
So, going out of core here and talking about some of the other things in the ecosystem: obviously, I'm sure many of you have heard of Helm. Helm is no longer part of SIG Apps; that was very recent. Helm has now moved to a top-level project under the CNCF. They have separate meetings every Thursday, I think at 9:30 a.m. PST.
Maintainers are responsible for their own chart repositories, but there's still one place where people can go and discover and find charts from different repositories. So that's the idea with the launch of the Helm Hub, and we think it will be a lot easier for people to go and find what they need, and chart maintainers won't be pushed back, or held back, by this bottleneck of trying to get updates and new charts into stable.
That's the current plan. So the plan, post-launch of the Helm Hub, is to work on documentation and to make it easier to run your own chart repositories, because that's one of the biggest hurdles of this: the stable repository is easy, I can just send a PR and I know it's going to appear in that repository, but how do you go and stand up your own chart repository?
So yeah, definitely check out the Helm Hub and ChartMuseum; again, I'm kind of just reiterating what was said in the keynote, where I think ChartMuseum 0.8.0 was released. And Kubeapps, which is another project, an application dashboard that makes it easier to install Helm charts in your cluster, just released 1.0.0.
Documentation-wise, one of the things we've worked on is a set of guidelines around what labels should be used within your resources, to help group those resources and make it easier to find what's running in your cluster. So here's an example of a case where you might have a WordPress application that includes a database, like MySQL, and its resources.
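The recommended labels from those guidelines look like this on, say, the MySQL part of a WordPress installation (the instance name and values here are illustrative):

```yaml
# app.kubernetes.io labels applied to the MySQL resources inside a
# WordPress installation, so tools can group them by application.
labels:
  app.kubernetes.io/name: mysql
  app.kubernetes.io/instance: wordpress-abcxzy   # this particular install
  app.kubernetes.io/version: "5.7"
  app.kubernetes.io/component: database
  app.kubernetes.io/part-of: wordpress           # the umbrella application
  app.kubernetes.io/managed-by: helm
```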
This was done as part of a working group that spanned a couple of different people, and I think a few different SIGs as well; SIG Apps was a big part of it. I think it was actually the first working group to close, as well: the App Definition working group. This was the output of it. Good success: we actually closed the working group.
That would be a breaking change, because some people might be relying on those older labels, and it would also change the way deployment tools reference resources, because I think some of those labels are used to actually reference the pods that a deployment is part of, and things like that, in the selectors. So that might be a breaking change, but hopefully we can slowly migrate. As of today, I think a few charts in stable are using these new labels, so it's just been a slow rollout.
So I briefly mentioned the Application CRD and controller, and the idea of this project is basically to define a common API to understand what an application is within Kubernetes. I should make clear here that the Application CRD is not going to be responsible for deploying applications; it's more just to give a representation of what applications are installed in the cluster, so it's kind of metadata. The controller is going to be responsible for gathering kind of aggregated health metrics about the application and also managing garbage collection.
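A sketch of what an Application resource looks like, based on the kubernetes-sigs/application repository; field names may shift while the CRD is pre-1.0, and the values below are illustrative:

```yaml
# An Application instance: pure metadata about an installed app.
# Nothing in this object deploys or manages anything.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: wordpress
spec:
  selector:                  # matches the labelled resources it describes
    matchLabels:
      app.kubernetes.io/name: wordpress
  componentKinds:            # the kinds that make up this application
  - group: apps
    kind: Deployment
  - group: ""
    kind: Service
  descriptor:
    type: wordpress
    version: "4.9"
    description: Blogging engine plus its MySQL database
    icons:
    - src: https://example.com/wordpress.png   # illustrative URL
```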
The CRD has a bit more information that the labels don't really cover, and I think both can be used together to gather information about what's going on. But to give an example: some of the things that can't really be represented in labels are things like icons, which would be kind of a base64-encoded icon string, and there's a few other things, like being able to check the status of the application as a whole.
So the main thing here is that the Application kind is not responsible for actually managing the application; it's more just metadata, so it wouldn't do any sort of operation at all. But, for example, the etcd operator could also deploy an Application instance resource, for you to know that there's an etcd cluster running inside your environment that's been created by the etcd operator. The etcd operator would still do the deployment and the management of that cluster, but you'd be able to get information about it using the Application resource.
That's a good question. I don't think the Service Catalog has been discussed in the scope of the Application CRD yet, but you can go over to this repository, and I think that'd be a great question to ask there, because, yeah, that's a good point: it'd be interesting to see what applications are connecting to what services from the Service Catalog, and I think that could be an extra field in the spec to denote that.
I think if you go to this repository and open an issue there, or if you just come on one of the calls and ask that question, I think that'd be great. And actually, speaking of the Service Catalog, that's a great segue to the portable service definitions, which is another proposal that's in progress. The idea of the portable service definitions is essentially to create a common API for requesting and consuming external services, like MySQL. The main way this differs from the Service Catalog is that this is more about making sure the way you request an external service and consume it in Kubernetes is portable. So, whilst the Service Catalog could be a back-end for actually going and provisioning that external service, there's currently no way to ensure that the credentials you're going to get back, in the secret that the Service Catalog creates for you, are going to be portable between different cloud platforms.
So with portable service definitions, we're going to come up with a spec for particular services, because you need to have that knowledge about a particular service to be able to understand what makes sense and what that format should look like. And the idea is to create kinds for each of these services so that people can request them in a common way, and the Service Catalog could indeed be a back-end for those definitions.
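Purely as illustration of the idea, a request for a MySQL instance might look something like this. This is hypothetical: the proposal had no settled schema at the time of the talk, and the group, kind, and field names below are invented, not from any spec.

```yaml
# HYPOTHETICAL sketch of a portable service definition: the consumer
# asks for "a MySQL" and gets back credentials in a standard shape,
# whichever backend (Service Catalog, operator, cloud) provisions it.
apiVersion: services.example.io/v1alpha1   # invented group/version
kind: MySQL                                # one kind per service type
metadata:
  name: wordpress-db
spec:
  version: "5.7"
  storageGB: 10
status:
  secretRef:
    name: wordpress-db-credentials         # portable credential keys
```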
Yeah, it's kind of similar to that. It can also be kind of similar to, I guess, persistent volume claims and storage classes, and that idea: you don't have to know what's actually going and provisioning that, but you care about the interface that you're then connecting to with your applications. So I see it at kind of that level, but the CloudEvents abstraction makes sense as well there. But again, this is a KEP that just got merged.
If you haven't seen the new enhancements repository, it's just kubernetes/enhancements, and under keps/sig-apps you'll find all the enhancement proposals for SIG Apps; the portable service definitions one is here. So if you have comments or feedback on that, this would be the place that you can go and send a PR to.
All of these great tools have been created in the ecosystem, and we'd love to talk about them. If you're working on something that is related to SIG Apps, please come and demo it; we'd love to see it, we'd love to help you get more feedback and help you drive the features that you need to develop it. And once again, if you want to get involved, the SIG Apps deep dive is Thursday at 11:40 a.m., so I'll be there, Matt Farina will be there, a bunch of us.
The SIG Apps veterans will definitely be there, so you can come and ask those questions and just get involved. Outside of KubeCon, when you all go back home: we'll be skipping next Monday's meeting, because we'll be kind of recovering from KubeCon, I'm sure, but the Monday after that... actually, no, we'll be starting up again in January, because we'll be hitting Christmas and New Year's. So I think the next meeting will be...
The benefits of using the Application CRD: if you, as a cluster operator or application operator, want to get an overview of what's running inside your cluster, then the Application kind gives you that ability, and you can also see, kind of at a glance, the health of each application. So it's this abstract view of what applications are running in your cluster; that's the idea.
Right now, yeah, right now you'd have to do that all yourself: you'd include the Application kind as part of your application. In the future, we'd like to see tools actually adopt the kind itself and generate it for you and populate it for you. So right now it is a little painful, but I think there's a proposal for Helm to adopt the Application kind.
So the Application kind will be the API, but then each application running in your cluster will be an instance of that Application kind. Within that, there could be labels that differentiate different roles: so, say you have the same WordPress application, but one has a different name, or, you know, a different label to say what kind of tier it is, and then there will also be Kubernetes namespaces involved as well. So there are different levels at which we can isolate that metadata.
That would be great; it'd be great to have you on, or maybe just discuss it. I don't want to put you on the spot, but that sounds good. Yeah, it'd be great, since it's early days, to start hearing about what people are thinking around it, because that's really the stage we're at: we just want to get feedback and try to understand a bit more what problems this could solve.
Yeah, exactly, that's a good point. I can't remember if there was some discussion around whether it should be set as the owner or if it should... actually, yes, so I think the controller would pick up and use the selector based on the labels, and set the Application as an owner for each of those resources. I think that was the plan, so yes, in that case we get the Kubernetes garbage collection out of the box.
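Sketching what that adoption would look like on an owned resource (names and the uid value are illustrative; whether the Application is set as the controller owner was still under discussion):

```yaml
# A Deployment the Application controller has adopted by adding an
# ownerReference; deleting the Application then lets Kubernetes
# garbage-collect the Deployment automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  ownerReferences:
  - apiVersion: app.k8s.io/v1beta1
    kind: Application
    name: wordpress
    uid: d9607e19-f88f-11e6-a518-42010a800195   # illustrative uid
    blockOwnerDeletion: true
```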
No, it's kind of the other way around: Helm or kubectl would actually go and deploy the application for you; that's the idea. Right now, again, you'd have to go and create the Application kind manually, but Helm could go and spec out the Application kind for you as part of that, so that you don't have to; you would include it as part of your Helm chart.
So yeah, when you're building an operator with Kubebuilder, Kubebuilder could also spec out the Application kind for you. Or, whenever there's an instance of your operator's kind in play, the operator could also go and create the Application kind for you, so you have this other place where you can get metadata about the application. So they kind of interact with each other.
Yeah, so I think that would still be the concern of the tool that's being used to deploy the application, so that would be kubectl or Helm or kubecfg or ksonnet, some of these tools, because they will actually be doing the rollout and the deployment of those things.