From YouTube: CNCF SIG App Delivery 2021-03-03
D: Oh okay, so for you it's actually midday, probably lunch time? No? It's...
D: Oh wow, I didn't realize that Austria is that far ahead. I'm in Austin, Texas, okay, yeah. So it's just about 10 o'clock in the morning right now.
F: No, I'm actually not with the Konveyor team, I'm with the KubeVirt team, but we're just kind of visiting. We haven't been in a meeting for a while; we actually presented last summer, and we're just getting re-engaged with you guys again. So we're just here to listen.
A: Okay, perfectly fine. I just want to ensure that people get a chance to present. Today we actually picked a good day, because we are pretty much packed: not just with what I would never call boring updates, but with actual project presentations today, which is great. And given that the agenda is pretty packed, I would propose we jump right in.
A: If there's anything you want to discuss, feel free, basically, to add it to the agenda for presentations. We usually try to keep them to 15 to 20 minutes, okay. We have KubePlus here and LitmusChaos, so my proposal, because I already see them here: I would like to start with the LitmusChaos project update, and probably then also the point that they want to apply for incubation.
A: I think it's good to see what happened in this project, and we can guess who's here from LitmusChaos; I can see the ChaosNative background here. So Litmus is the chaos engineering framework, but I'd like to directly pass over to them and give them the stage to give us a short update. Again, try to keep it to roughly 15 minutes and that's fine.
G: All right, I'll just need to run through it then. I really thought 20, but I'll try to make it land in between. Thank you.
G: But we will survive if it's 20, that's fine! All right, thank you very much. Hello everyone, hi Alois. So basically we are presenting here because we applied for moving to incubation a couple of meetings ago; we did discuss it here, and as part of that I'll provide a quick project snapshot update, covering what we have done in the last nine months and where we are in terms of the project itself.
G: The PR: we applied around the November KubeCon timeframe, and after that we've been a bit busy, really working through Litmus 2.0, and there was also a big community event. So this is a good time to come back and present. On the project snapshot update: the maintainers remain the same, Intuit continues to be active, Amazon is a docs maintainer, and the primary sponsor is...
G: ...we were a team under MayaData, and recently we spun off the project, or the team, to start as ChaosNative, with the goal to really focus on Litmus. So that's good news: previously my time was divided a little bit across two CNCF projects; now it's Litmus. Apart from that, I'm happy to state here, based on the stats we took from CNCF DevStats, that there are a lot of good contributions that we received from other companies.
G: That's the stats, and we also formally maintain a list of adopters; we encourage whoever is using Litmus to come and tell us, or fill out this application. So far, this is the list. Recently we got a telco reference as well: Orange has been pretty active in terms of using Litmus for their chaos needs, and they also presented recently at a conference. Intuit was there earlier; recently Keptn, WeScale, Orange, and Okteto were added, and there are many others.
G: They can be taken as references, if required by CNCF, as they've been using Litmus in production and in other forms as well. So that's a quick update on who's using it. In terms of the stats themselves: one of the things that we continue to add is new experiments; that's the purpose of creating a Chaos Hub. We don't want the core team to write all the experiments, and it's almost impossible anyway, right, with the kind of growth we're seeing everywhere.
G: So that's working well, and in the meantime we have concentrated on building the project itself to have a super solid foundation for chaos at scale. We see the project adoption slowly increasing, with more Slack members joining and asking questions; that's one proof point, and there are 70-plus totally new contributors that were added since sandbox. We have also defined many operational SIGs, of which four are kind of working well, and the community meetups driven by us continue to happen.
G: But what I primarily wanted to share is that there are two other meetups that were started by the community members themselves, so that's a sign of adoption in different geographies as well. Before the one or two slides I have on Litmus 2.0, these are some notable features: we did improve our CI and e2e pipelines to deliver patches fast and on a monthly cadence, and we never missed a release so far.
G: We are releasing a patch release almost every month, apart from the main releases, and we did meet our architectural and design goal of delivering Litmus Portal. It is a huge step towards multi-tenant chaos and declarative chaos for the cloud-native ecosystem. We also didn't want to stay at the experiment level; we wanted to go to chaos scenarios, so we integrated with Argo Workflows. And there was a lot of feedback along the lines of "you don't get to define what the steady state should be..."
G: "...I will define it myself." So we introduced probes, with a lot of community work. One of the other main things that we did is observability improvements. A lot of chaos can happen in different forms, but whoever is affected should be able to know what happened and when: was this a problem because chaos was introduced, or was it a natural occurrence?
G: So the context of the chaos introduction should be captured in the monitoring system, and we did define that. A lot of namespace ownership issues also came in, and now Litmus can run within a namespace: if developers are sharing a large cluster and own only their namespaces, they can manage chaos within those.
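To make the namespace-scoped mode concrete, here is a minimal, hypothetical sketch of a ChaosEngine that a team could run entirely inside its own namespace. The names, labels, service account, and the pod-delete experiment are illustrative assumptions, not taken from the talk; check the Litmus docs for the exact schema of your version.

```yaml
# Hypothetical sketch: a ChaosEngine scoped to a single team namespace.
# All names and labels below are illustrative.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: team-a-chaos
  namespace: team-a                      # runs within the team's own namespace
spec:
  appinfo:
    appns: team-a                        # target app lives in the same namespace
    applabel: app=checkout
    appkind: deployment
  chaosServiceAccount: team-a-chaos-sa   # namespace-scoped RBAC, no cluster-wide rights
  experiments:
    - name: pod-delete
```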
G: That's kind of a quick overview. The Litmus SIGs were primarily encouraged by other projects like Meshery, and by the CNCF SIGs themselves. There are a lot of questions on community calls like "I want to contribute here or there; my interest is in observability," so we defined SIGs within Litmus, and right now the documentation SIG is working very well; there are two contributors who are driving...
G: ...the docs needs of Litmus. Similarly for deployability: two contributors are coming in and they manage their own SIG, so that's also working well, and we continue to run SIG Testing, SIG Orchestration, and so on. As you can see, apart from the ChaosNative team there are other members who are actually chairing these. We want each SIG to be chaired by someone other than the ChaosNative team, so that we get more natural feedback and roadmap reviews. In terms of the contributing orgs, we took this from DevStats.
G: Apart from MayaData, which is ChaosNative now, you can see that Intuit, telecom companies, Red Hat, and even Microsoft itself show up, and some of these are pretty active in contributing. Overall, even though we took the top 18 here, since sandbox we got 70 unique contributors.
G: So that's good news, showing that the project is actually being contributed to. This is another stat that we wanted to present. As you can see, the yellow line is the number of forks; there's a rise in October because of Hacktoberfest. Other than that, GitHub stars are generally linear, we keep getting a few stars here and there, and the count of open PRs and issues is actually going down.
G: That really means we're working hard to close the issues; there's a lot of work we're doing to fix things, and we're closing many of them on the way to Litmus 2.0.
G: These are the notable contributions; given the 15 minutes, I'll just rush through them. Primarily, Red Hat and Orange are contributing a lot to Litmus probes: the definition of them, the definition of steady state, and how to introduce chaos on cloud platforms from Kubernetes itself. Those are some of the ideas that they brought in, the conversion from ideas to actual code has happened, and it is already a feature now.
G: These are the integrations. One of the things we believe matters for the uptake of Litmus is how well it is integrated into the various CI/CD tools, because generally, before chaos comes in, the teams are already using such a tool. So we started working with Argo Workflows; it's not a CI/CD tool, but we wanted to use it to define chaos workflows. Then Spinnaker, Keptn, GitLab, and GitHub: these are the four CI/CD tools, and many more are in the queue...
G: ...just waiting for some community members, or us, to prioritize them. KubeVirt is another integration, done by Red Hat, where they wanted to introduce chaos for a non-Kubernetes target, KubeVirt VMs. That went well. We have also started contributing to the CNF performance test suite. And Okteto is a developer cloud for Kubernetes, so they get a ready environment, and when they merge code they can run chaos with Litmus; that's a good use case to develop. On the community events...
G: ...on YouTube we are primarily targeting how to make things easy, plus interactive tutorials on Katacoda. We did orchestrate a good chaos panel discussion involving our own project users as well as users from other communities, and we had a great event, primarily helped by the Litmus team as volunteers: Chaos Carnival. Apart from the ChaosNative team, six Litmus users came and presented how they're using Litmus in various forms. We did have two boot camps, and we're also gearing up for the CommunityBridge programs.
G: GSoC, Google Summer of Code, and docs: docs are something we treat with very high importance. Another piece of good news is that cross-project collaboration is happening, primarily with Argo Workflows and Keptn, where we keep looking at how to do chaos with continuous delivery and that kind of thing. These are some of the slides I'll quickly run through, from Container Solutions; the links are given here.
G: The video links: the network chaos, and how it was orchestrated by Container Solutions using Litmus; and this is how Red Hat is using probes, chaining the probes to define steady state and do KubeVirt chaos. This was the talk.
G: The Keptn integration by Jürgen, which was an awesome, well-received one; they've been using it continuously. Also, from IAG, Michael came and talked about how they do AWS EKS chaos using Litmus at scale for their general application, around pod scaling: whether it would work or not when some chaos was present. And one of the main things that we added is the GitOps integration into chaos. This was one of the features we've been trying to work on:
G: how can we scale chaos itself to larger systems? So we kind of have front-end GitOps and back-end GitOps, and we did demonstrate that it can work with any chaos...
G: ...sorry, any GitOps tools, like Argo CD or Flux, and also Spinnaker. We did demonstrate this, and it is coming out in the 2.0 beta in about a couple of weeks from now. The Okteto demonstration was good as well. I'm just putting these here as proof of Litmus users coming in and speaking at public events. This is a telco reference, where they have publicly demonstrated how they have used Litmus for their telco platform...
G: ...and its chaos needs. That's about it for the various community updates and the progress that we made in the project. We are certainly proud of the architecture built up in the last one year, and of one of the goals we set before we even started announcing Litmus as a project for public consumption.
G: Back in 2018 we had set certain goals for chaos engineering; we wanted it to really fit the cloud-native goals. These are some of the goals that we defined. Actually, GitOps was added last year because of the rise of GitOps, so there were four principles that I wrote up two years ago in a CNCF blog. The principles are basically that chaos engineering should be open source, that the experiments should be community-collaborated, and that there should be an API for managing the lifecycle of chaos itself...
G: ...and you need to have open observability; you don't want to get locked into a particular observability system. So with these goals we worked towards Litmus 2.0, and the last one year has been fantastic. I would like to state here that we have actually achieved the feature-complete state for all of this. The first three are pretty much in usage; GitOps and open observability we are about to release, and they have been tested in some form, but this year, I think, you know, that's the risk...
G: ...that we are willing to take. Overall, 2.0 really means Litmus has changed from being a tool for a single user to execute chaos experiments into a kind of toolset for teams operating across cloud environments to execute chaos workflows (not chaos experiments: chaos workflows) in highly scalable cloud-native environments. And why do we state that? Because we did a lot of work, as I mentioned: workflows, the chaos portal, the GitOps integration, chaos analytics, observability, steady-state definition through probes, VM chaos, and namespace chaos.
G: So with all this together, Litmus is more formidable and ready to be used by larger enterprises, where it is already in use, I would say. This is again a repetition of that: experiments, we believe, will become a commodity; it's more about chaos scenarios, and it moves from per user to teams, and from per cluster to a multi-tenant, cross-cloud system per organization. You need to manage Litmus at the organization level, just like you do GitOps: a single source of truth, where you keep all the configuration in one place for all the teams.
G: Together you can manage chaos in the same way: chaos experiments, or workflows I would say. Earlier we used to have all the experiments put in one public hub, but now teams can have their own private hubs, because they develop their own experiments that they want to manage within the organization; some they cannot upstream, some they tune or write themselves, and then they keep them there. Litmus will work very well in terms of orchestration with a private Chaos Hub. Earlier it was CLI-only; mainly for observability and ease-of-use purposes...
G: ...we brought in a GUI as well, and GitOps, which will inherently help with scalability and management. And then the primary new feature is really the probes. It has gotten a lot of attention from the community: users are able to declaratively define what they think is the steady state of the system before the introduction of chaos, which is the hypothesis definition. It differs for each application, and for each team using the same application, so they have complete flexibility to go and define the steady state.
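As a rough illustration of what declaratively defining steady state looks like, here is a hypothetical probe snippet of the kind that sits inside a ChaosEngine experiment spec. The URL, names, and timings are invented for illustration, and the exact field names can differ between Litmus versions.

```yaml
# Hypothetical sketch of a Litmus httpProbe declaring a steady-state hypothesis:
# "the checkout service keeps answering 200 throughout the chaos run".
probe:
  - name: checkout-availability
    type: httpProbe
    mode: Continuous               # evaluated before, during, and after chaos
    httpProbe/inputs:
      url: http://checkout.team-a.svc:8080/healthz
      method:
        get:
          criteria: ==             # the steady-state criterion
          responseCode: "200"
    runProperties:
      probeTimeout: 5
      interval: 2
      retry: 1
```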
G: So what is ahead for us and Litmus? We are planning to write more documentation, socialize with the GitOps tools, and get more application-specific experiments contributed to the Chaos Hub, while we continue to solidify and strengthen the foundation. In the short term, getting 2.0 out, getting it used, and listening to the community is our goal; but in the midterm, the next two quarters...
G: ...we want to get more chaos types, like gRPC and IO chaos, and we also want to introduce the Rust library for the SDK.
G: There has been some interest from the community, so we will be working on that, and other than that we would like to work with as many tools as possible in the CNCF ecosystem, and also with other CNCF projects. We'll see, depending on the bandwidth.
A: Yeah, thanks, that's a great update. I remember the very first presentation you made, I think about a year ago. On 1.0 versus 2.0: is 2.0 already released, and how many of your users are on it?
G: Right, 2.0: around three months ago there was a master branch that we created, so for 2.0 users, there are multiple large users who are using the portal already; I would say about five to ten percent of users. We're going to announce the 2.0 beta on March 15th, and then, a couple of months from then, it will go into the main release. Basically, the entire community is used to a certain way of using Litmus, and we don't want to just abruptly change that.
A: Maybe, especially for your incubation proposal: we had a similar discussion with the Flux team, between Flux v1 and v2, and I think it's actually pretty natural that software evolves, especially in these scenarios. Just be clear on what the transition looks like, and especially when the TOC wants to talk to users, I think it's also good if you have some that might already be using 2.0, because...
A: ...some of these things are going in parallel, like the project maturing to your 2.0 release alongside the incubation proposal. From a validation perspective it's kind of hard if you talk to the 1.0 users only; you have to see how people are migrating over. Just check that for yourselves. That's some input for the incubation.
B: Yeah, so the GitOps integration, Alois, is more about reacting to changes to an application on the cluster. We have an event tracker on the cluster that can basically pick up changes made to an application, and the application changes themselves can be pushed by any GitOps tooling, like Flux or Argo CD.
B: That can be the cause for a new experiment to be triggered on that application. So it's a way of verifying whether a change in the application is actually good for the system: is it still resilient? We can verify that with an experiment that we trigger. That's one part of the GitOps story; the other part is that the chaos experiments themselves, or the workflows that Uma mentioned, can be stored in Git and synced into the Litmus portal.
B: I hope that answers it.
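One generic way to picture the second half of that answer, chaos manifests stored in Git and kept in sync, is a plain Argo CD Application pointing at a repo of chaos manifests. This is a hypothetical sketch with an invented repo URL and namespaces, not necessarily how the Litmus portal syncs internally.

```yaml
# Hypothetical sketch: keeping chaos manifests in Git and synced with Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: chaos-workflows
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/chaos-manifests   # illustrative repo
    targetRevision: main
    path: workflows/              # ChaosEngine / chaos workflow YAMLs live here
  destination:
    server: https://kubernetes.default.svc
    namespace: litmus
  syncPolicy:
    automated: {}                 # keep cluster state converged with Git
```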
A: Yeah, I think that clarifies it. I think it also helps, obviously, for release validation and things that we do as well. I want to open it up to other people before we jump to the next presentation; I've already asked a lot here.
H: Just a comment, and just as always: very impressive, both what you have done with Litmus and the way that you have explained where you're at and what you're doing with the incubation proposal. So thank you, great presentation.
D: Yeah, I'm just trying to... okay, there you go; the Zoom link there was creating problems. Okay, so I'm the founder of CloudARK, and at CloudARK we have built, or have been building, KubePlus to solve the problem of how to create multi-tenant application stacks on Kubernetes.
D: At CloudARK we have been working with startups and enterprises who are essentially looking for help to set up their Kubernetes clusters. By that, what I mean is that the platform engineering teams are looking for help to really make sure that their cluster is usable across different workloads. For example, one startup that we are working with wants to host MongoDB as a service on Kubernetes, and they want to create these MongoDB stacks per tenant.
D: Similarly, there is another startup who is working with Moodle and wants to create Moodle stacks per tenant, and another enterprise is building a browser-as-a-service on top of Kubernetes. Across all these teams, what we have seen is that the main requirement is that they have a Helm chart of their application package, and from that Helm chart they want to create multiple instances per tenant. For example, in this slide what I'm showing is the WordPress Helm chart, let's say for different tenants.
D: When working with these teams, what we have seen is that the challenge platform engineering teams typically face is how to really isolate the various resources across different tenants. Meaning that, for example, in the case of Moodle-as-a-service, what that startup wants to do is run the Moodle stack for one tenant on one worker node, while for another tenant they want to deploy it, segregated, on a different worker node. Or, for the team that is building browser-as-a-service...
D: ...they run that browser instance for each tenant separately. The predominant way this is typically done today is through some convention, like labels: the platform team and their consumers will agree upon certain labels, and then the ask is that in these Helm charts, whoever is deploying the application makes sure that the right labels are used, or the right...
D: ...labels are defined. But that is not such a straightforward problem: really checking whether the Helm charts include the right labels, or even whether you can apply the labels on every resource that gets created as part of the Helm chart. Because Helm charts can include custom resources, and there is no way today to know what sub-resources get created by the operator who is managing a given custom resource.
D: So essentially, what it comes down to is that the problem platform engineering teams face today is really: how do you define and enforce tenant-level policies, for example deploying separate stacks on separate nodes? How do you track consumption metrics for CPU, memory, storage, and network? And how do you visualize the tenant-level resource topologies, which is the graph of all the resources that are created as part of a particular tenant's stack?
D: We are addressing this problem through KubePlus, and our basic idea is: let's wrap an API around the Helm chart. This API will basically provide a control point for the platform engineering teams to really define and enforce these kinds of policies, and it will also provide them a way to expose only those things that they want to expose to the end user.
D: So that's the crux of KubePlus: you give it a Helm chart as input, you then define policies and monitoring, and KubePlus will generate an API for you; you can then create instances of that API to actually create the stack for every tenant.
D: It consists of two components. There is one component which we call "CRD for CRDs", which is essentially a top-level CRD called ResourceComposition, using which you can create whatever CRD you need for a particular platform service. For example, in this picture we have ResourceComposition used to create a WordpressService CRD, a MongodbService CRD, and so on, and from those you can actually instantiate an instance of that CRD to create an application stack. So that is one part of KubePlus.
D: The other part is a set of kubectl plugins, which allow you to visualize the runtime graph of all the resources that are created as part of an application stack.
D: The main components of "CRD for CRDs" are, as I said earlier, ResourceComposition as the top-level custom resource, and then we also have ResourcePolicy and ResourceMonitor custom resources. The main work that is done by KubePlus happens through a mutating webhook and a custom controller, and there is another module which actually does the work of deploying the Helm charts. We use, and depend on, Helm 3.0, so every Helm...
D: ...chart needs to be packaged that way. So here's the demo scenario for today: WordPress-as-a-service, with a Helm chart that we have built. It is a simple WordPress pod with a service, ingress, persistent volume, and so on, and for its database needs we are using the Presslabs MySQL operator, which provides a MySQL cluster as a custom resource. So: a pod and a custom resource. The assumption that we are going to work with is that the operator is already deployed.
D: That will typically be done by the platform engineering teams; they will work out and deploy the operators ahead of time, so we start there. For the demo we have two worker nodes, all on GKE. And by the way, I have screenshots just because it's easier, but this entire demo is also available on GitHub, and I have pointers to it towards the end.
D: So the first step that we start out with is defining the ResourceComposition CRD, which in this case is going to take the URL of a Helm chart as input; this is the Helm chart that packages the WordPress pod and the MySQL cluster. In addition, there is a policy definition; on the left side I have shown the policy definition and which policies we are going to apply. There are two policies. The first part is that for every pod that gets deployed as part of this Helm chart...
D: ...we want to define the resource requests and limits, and I have picked CPU and memory with some arbitrary values just to showcase the demo, so we specify that as part of the policy. The second part that we want to specify is on what node a particular pod needs to be deployed, and if you see here, we have defined the nodeSelector as values.nodeName. Essentially, that allows us to customize the inputs that we receive for every tenant, so that different tenants can be deployed on different nodes.
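Pieced together from this description, the step-1 input might look roughly like the sketch below. This is hypothetical: the API group and version, the exact field names, and the chart URL are assumptions to be checked against the KubePlus documentation; the request and limit values echo the talk's arbitrary demo values.

```yaml
# Hypothetical sketch of step 1: wrap the WordPress Helm chart in a new API
# and attach a pod-level policy. Group/version and field names are assumed.
apiVersion: workflows.kubeplus/v1alpha1
kind: ResourceComposition
metadata:
  name: wordpress-service-composition
spec:
  newResource:
    resource:
      kind: WordpressService                 # the CRD KubePlus will register
      group: platformapi.kubeplus
      version: v1alpha1
      plural: wordpressservices
    chartURL: https://example.com/charts/wordpress-0.1.0.tgz   # illustrative URL
    chartName: wordpress
  respolicy:                                 # applied to every pod in the chart
    policy:
      resources:
        requests: {cpu: 200m, memory: 256Mi} # arbitrary demo values
        limits:   {cpu: 500m, memory: 512Mi}
      nodeSelector: values.nodeName          # resolved per tenant instance
```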
D: So with this as input, you give this to KubePlus, and you define whatever name you want to define; in this case, WordpressService is the name that we are going to define. KubePlus will register the Helm chart and will register this new CRD in your cluster. That is the first step. Now, once that is done, the second step is that you are going to create instances of this WordpressService: for tenant one you will create one instance, and for tenant two...
D: ...you will create another instance. This is just showing that WordpressService instance, and the spec properties of this instance are essentially going to come from the values.yaml of that Helm chart. Whatever underlying chart you have built, its values.yaml will be reflected as the properties of this custom resource, and that's why we see here the namespace and the tenant name. In this case, these three, node name, tenant name, and namespace, are going to be inputs which the platform team will have to specify for different tenants.
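For concreteness, a per-tenant instance of the generated API might look like this hypothetical sketch; the kind matches the CRD sketched above, and the spec fields mirror the chart's values.yaml, with illustrative names.

```yaml
# Hypothetical sketch of step 2: one instance of the generated API per tenant.
apiVersion: platformapi.kubeplus/v1alpha1
kind: WordpressService
metadata:
  name: wp-service-tenant1
spec:
  tenantName: tenant1
  namespace: tenant1-ns          # where this tenant's stack is instantiated
  nodeName: gke-worker-node-1    # consumed by the nodeSelector policy above
```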
D: They can pick different node names. So this is the second step. Once you create this, what you get is the WordPress stack, which I am now visualizing using the second KubePlus component, the kubectl connections plugin, to see the entire resource graph. What we see here is the WordpressService instance wp-service-tenant1 that we created, and behind the scenes, when the instance of WordpressService was created...
D: ...the actual operator got invoked and it did its thing. And if you notice, the MySQL cluster for tenant one has so many other resources that it is creating. KubePlus is able to discover all of these at runtime by tracking four different relationships that exist between Kubernetes resources: owner references, spec properties, labels, and... I forget the fourth one, but all four properties that we typically have for Kubernetes resources.
D: We are able to track those, and what we see here is that the top-level resources for this WordpressService instance are part of the Helm chart: there is a secret, there is a service, an ingress, a persistent volume, and the pod. Those are part of WordPress, and then the MySQL cluster is the other resource that is part of the Helm chart.
D: Okay, and if you see, there are only two pods that are part of this entire graph, and we are able to discover these graphs. Once we have the pods through these graphs, we are able to verify policies. This slide is showing that if you look at the pod for tenant one's MySQL and check its resources, then the CPU and memory requests and limits are what was specified as part of the policy input.
D: And similarly, if you look at the node name, these were the node names that were specified when this particular tenant instance was created. So what we do as part of KubePlus is that the mutating webhook is able to catch all the pods that are getting deployed through that Helm chart, and then we modify the spec properties of the pods before the pods get deployed.
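The net effect on each pod of the chart, as described here, would be roughly the following hypothetical sketch; the injected values echo the illustrative policy above, and pinning via kubernetes.io/hostname is one assumed way to realize the node placement.

```yaml
# Hypothetical sketch: a chart pod after the mutating webhook has injected
# the tenant policy, before the pod is admitted.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-tenant1
spec:
  nodeSelector:
    kubernetes.io/hostname: gke-worker-node-1   # from the instance's nodeName input
  containers:
    - name: wordpress
      image: wordpress:5.6                      # from the Helm chart, untouched
      resources:                                # injected from the policy
        requests: {cpu: 200m, memory: 256Mi}
        limits:   {cpu: 500m, memory: 512Mi}
```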
D: That way there is no restart of the pods; before the pods are even deployed, the right policies are embedded into the pod spec. And because we are able to track the pods, we are able to collect the metrics as well. So here is the output of another KubePlus kubectl plugin, kubectl metrics, which allows us to collect CPU, memory, storage, network bytes received, and network bytes transferred metrics for that particular instance.
D: So, in this case, for this particular tenant, we are able to collect all these metrics, and then these can be exported in Prometheus format as well, and can be seen in any tool that supports the Prometheus format.
D: Now, in order to design this approach, one thing that we really focused on was how to create these APIs that wrap the Helm charts, and the approach that we chose was to build this top-level, or meta, API, the "CRD for CRDs", to create these multi-tenant platform services.
D: The reason we went this route is that it just makes it easy to provide a declarative way to define these APIs with the monitoring and policy inputs. The other reason is that by having ResourceComposition as a top-level single CRD, it allows us to have a single operator: the custom controller that KubePlus contains for ResourceComposition is a single custom controller running in the cluster, and it is able to generate any APIs, and handle any APIs, that you would create. As against this, something like the Helm operator would actually create a new operator for every Helm chart.
D: So, just from the point of view of the footprint of this operational control plane, having a single operator is simply better. Those are the reasons why we went with this approach, and, yeah, I just wanted to come here and present to this community.
A: I think this kind of reminds me a bit of what the Operator Framework does with their Helm-based operators, to some extent. They can also take Helm charts, package them, and you get a new CRD for the values file.
D: Correct, so this is the point that I was referring to, where we've seen the Helm operator. The advantage of using a CRD for CRDs is that you don't need to create a separate operator for every Helm chart. My understanding of the Helm operator is that you start with the Operator SDK and it will actually generate a new operator for a Helm chart, and then you will have to instantiate that operator in your cluster.
D: So essentially, let's say you were to consider a situation like this one, where you want to have an API for WordPress, an API for MongoDB, or a browser-as-a-service as part of the same cluster: then you would end up actually instantiating three different operators, each with their own control planes.
D: Our approach is different from that, in that we are actually allowing you to declaratively create these APIs, and the input that you provide to them is the link to the Helm chart, which gets registered as part of creating the new CRD; the consumers then actually create instances of that new API which got created.
D: You are correct, so the platform engineering team will actually declaratively define a new CRD.
D: Let's say they use the ResourceComposition to define WordPress as the custom API, and as part of that it will reference the Helm chart. So the platform engineering team will only be responsible for managing the Helm charts, and they can take them from the community, it doesn't matter; KubePlus will then actually install the new CRD, and it is also able to react to the events for those new API types.
D: So, for example, WordpressService is a new kind that gets registered, or MongodbService will be a new kind that gets registered, and KubePlus has the machinery to actually react to these kinds of new kinds, or new types, that get added at runtime by the platform engineering teams.
H: I think so. The way that I understand it now is that the CRDs are in fact created; it's that you're shifting the responsibility away from the platform team having to manage a host of CRDs, using something like the Operator Framework or whatever way they decide to create the CRDs, and your system takes that over, so it is responsible for that management.
D: Correct, so KubePlus takes over that part of actually instantiating the CRD and then providing hooks to define any policies that you want to apply to that entire Helm chart. Right now we are focusing on policies which are more the mutation-based policies, because a lot of these things need to happen at the pod level. I mean, our entire approach stemmed from the point that operators and custom resources create a lot of resources behind the scenes.
D: Right, like a MySQL cluster will end up creating two or three pods, and those pods are not visible outside, in the sense that the only abstraction that is available outside is a MySQL cluster instance. So how do you really control what gets specified in a pod? And it's possible that the operator developer might not have exposed all the pod spec fields at the custom resource level. So from there...
D: ...this entire approach actually stemmed: we want to be able to provide platform engineering teams a way to define mutations at the pod level. And because an application is predominantly packaged as a Helm chart (Helm is the packaging format that we have seen used), we focused on taking that as the input, and then it just made sense to wrap an API around it. But then, how do you really create that API?
D: Because, as a platform engineering team, you might be working with many different Helm charts, you know, and there's no way you can actually create a separate operator for the management of each of those Helm charts. So that's why this uber-CRD actually sort of entered our thinking: okay, let's have ResourceComposition as the top-level CRD, which allows the platform engineering team to define new CRs, and the design just takes away the responsibility of them having to instantiate and create new ones.
D: Yeah, absolutely, that's a great question. So, to be honest, we are looking for inputs, and right now the policies that we have defined stem from the work that we have done with our early adopters.
D: The things that we've seen people ask for are the ability to control CPU and memory, and the collocation of pods onto certain nodes. But you are right; the one thing that is coming up next is how you really attribute network traffic to only the tenant-specific network traffic, and more specifically traffic that is outward-facing. By outward-facing, what I mean is: let's say in this particular application, the WordPress pod is the one which is user-facing...
D: ...internally it uses the MySQL pod, but any network traffic, if you want to count it, has to be accounted to the WordPress pod to be accurate. So through our connections plugin we are able to track which pod is the only outward-facing pod. And that also applies to the policies, like policies around HPA-related things. Ideally, we would like to provide controls for everything that can be mutated at the pod level.
D: So as a platform engineer you can say that you want to override what's already in the Helm chart, and if you don't want to override it, a flag will tell you that. But right now the policies that we support today are the resource requests and limits and node selectors, and the next ones coming up are affinity, anti-affinity, and pod disruption budget. So there is a set of things that we are planning to do, and we would love additional inputs on what new things can be added.
A: What I'd recommend is, like: we have this little demo project here, podtato-head; maybe also have an example available for podtato-head, so people can simply try it out and play around with it, yeah. That's the first thing I would do now.
D: Okay, so yeah, I mean, we'll share these links on the Slack channel, so if folks can try it out and provide inputs, that will be really great. And yeah, thanks for the opportunity to come here and present; if any questions come up, we are also available on Slack itself. It's open source, so feel free to try it out.
A: That was good: two new project presentations, so I see that we're getting more momentum here. The next one we'll move to the next session; I'll reach out to people. Thanks to both presenters today. Obviously, Litmus folks, we will follow up on the next steps here, and thanks as well for the KubePlus presentation; it's always great to see new projects that you haven't seen before, and obviously multi-tenant deployment and more declarative ways...
A: ...on top of Kubernetes, moving more into this platform idea, are always great ways to look at it. All right, then I would actually call it a meeting for today. Thanks everyone for participating, and see you again in two weeks from now. Thanks.