From YouTube: CNCF App Delivery 2020-09-02
B
Hello, this is Lei Zhang, and welcome to the SIG App Delivery community meeting. We cancelled the previous meeting due to KubeCon Europe, because our folks were speaking there and couldn't make it, so we postponed several presentations to this time. The main topic of today's meeting is project presentations, and we are very honored to have two projects here.
B
The first one is LitmusChaos, which is already a sandbox project sponsored by SIG App Delivery, and today we are very happy to invite them to give a project update on LitmusChaos: what's new, what's the progress of their sandbox onboarding, what new features we'll see, and what's next on the roadmap. So this is the first item we have.
B
The second item: we are very happy to have the Flagger project here, which is an automated, progressive application delivery system that can work seamlessly with GitOps systems like Flux or Argo CD. We are very happy to have its maintainer here to give us the first presentation of Flagger: how it works, how it is designed, what scenarios it can be applied to, and how it fits into the landscape of CNCF SIG App Delivery. So we're very happy to have this project here.
B
So I don't want to waste time here, because we do have two projects to present, so I will let the LitmusChaos folks take over from here to do their presentation, followed by Flagger. We hope each presentation runs 15 to 20 minutes, but that's not strictly enforced, and we also have a short Q&A after every presentation. Okay, I'm not sure who will present the LitmusChaos project here.
C
Hi everyone, this is Karthik. I think Uma is due to present about Litmus; he will be joining us a few minutes from now. Could we go on with the next topic, I think?
B
Okay, no problem, so let's just start with the Flagger presentation. I believe Stefan should already be in the meeting, right?
D
Most of them overlap with continuous delivery. Of course, first you have to have a CI pipeline that produces immutable artifacts, something that signals that you shouldn't be using "latest" image tags in production.
D
Most CD pipelines right now are just reacting to some event, and only then applying the state. This is something that GitOps covers: you define your desired state, and that state is continuously reconciled, not only when, let's say, you do a git push. Something can change in the cluster at some point. Who is... who is playing?
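The continuous reconciliation described above can be sketched roughly as follows. This is an illustrative toy, not the Flux or Argo CD implementation; the dict-based "cluster state" is an assumption made for brevity.

```python
# Minimal sketch of a GitOps-style reconcile loop (illustrative only;
# real operators like Flux or Argo CD are far more involved).

def reconcile(desired: dict, cluster: dict) -> dict:
    """Drive the cluster state toward the desired state from git.

    Runs continuously, not only on git push, so out-of-band
    changes in the cluster ("drift") are also corrected.
    """
    new_state = dict(cluster)
    # Apply anything missing or changed relative to the desired state.
    for name, spec in desired.items():
        if new_state.get(name) != spec:
            new_state[name] = spec  # the kubectl-apply equivalent
    # Prune objects that are no longer declared in git.
    for name in list(new_state):
        if name not in desired:
            del new_state[name]
    return new_state
```

Because this runs in a loop, a manual edit made directly in the cluster is reverted on the next pass, which is the point being made here.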
D
Okay, so let's say you have your CD pipeline ready for this kind of reconciliation. Then progressive delivery needs a smart routing service, something that can look at the traffic and route it dynamically between upstreams based on some properties of the traffic, like cookies, HTTP headers, weights and so on. So this routing component cannot work with just...
D
...let's say a layer 4 CNI implementation; you need something at layer 7. Hey, do you know... do you know who is... who is...
D
Okay, so when we started working on Flagger, we set a couple of goals. Yeah... okay, I'm going to stop sharing; this annotation thing is the worst. I think you should pause or protect the meeting.
B
Okay, so I will end the meeting for now, and please join us again. Okay, okay.
A
Yeah, it just so happened that when we were presenting, in the back of my mind I was thinking: it's happening again; is it me that's causing all the stars to align so badly? I don't know. I thought our session was a one-off, but it looks like it's continuing.
A
I think if that happens, it's probably best to just talk and not share anything.
B
Okay, I found one setting to stop that, so let me try this. It seems that within the meeting I should be able to stop anybody from annotating the presentation.
B
I mean, when you share the screen, you can choose a setting named "disable attendee annotation". You can try that. So I will stop sharing my screen, and you can share your screen and then choose that.
D
Cool. Where was I? Okay.
D
When we started Flagger, we set up some goals, the major one being to give developers confidence in automating production releases. Deploy on Fridays: no issue, something will roll back if it fails. It should be a thing that runs on its own, fully automated. And in order to give this kind of confidence, we decided to expose...
D
...some fields inside custom resources for developers to be able to control the blast radius, to define the validation process, to run their own integration tests or any kind of automated testing as part of the deployment, and, for those that require it, manual approval for production releases.
D
It controls traffic and allows you to decouple the deployment of an application from the actual release process, and Flagger implements a couple of deployment strategies that you can choose based on the type of application that you want to deploy.
D
Another deployment strategy is A/B testing, and this is used for user-facing apps that need session affinity. I'm going to explain a little why this is needed. Let's say you want to do a canary release with traffic shifting for a front-end app. What traffic shifting means is...
D
...you set up a percentage of your users and you redirect those to the new version. But if your app has static assets, like, let's say, JavaScript or HTML, and also an HTTP API or gRPC API on the backend, and you don't pin a user to a specific version, then they can get, let's say, the JavaScript asset from version one and the HTML from version two. So you can see how these two cannot work fine together.
D
So for this type of app, you have to have a way to pin users to a specific version, and that's possible through HTTP headers or cookies, by defining a regex or a value that matches those.
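The header/cookie pinning just described can be sketched as a routing predicate. This is a conceptual illustration; the header name and pattern below are made up, not Flagger configuration syntax.

```python
import re

# Illustrative sketch of pinning users to a version via an HTTP header
# or cookie regex, as A/B testing with session affinity does conceptually.
# The header name and regex used in the test are assumptions.

def route_version(headers: dict, match_header: str, pattern: str) -> str:
    """Return 'v2' when the given header matches the pattern, else 'v1'.

    Every request carrying the matching header/cookie is pinned to v2,
    so one user never mixes assets from two versions.
    """
    value = headers.get(match_header, "")
    return "v2" if re.search(pattern, value) else "v1"
```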
D
What this does is: users are interacting with, let's say, version one, and all that traffic is cloned and sent to version two, but the response is not returned to the user. The user is not aware that all their actions are basically duplicated on the two versions. This works great for read-only content APIs. If you are using traffic mirroring for, I don't know, something that writes to a database or makes a transaction, then you'll get duplicated data and so on.
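The mirroring behavior above, serve from the primary and silently duplicate the request to the new version, can be sketched like this. The handler signatures are illustrative, not a real proxy API.

```python
# Sketch of traffic mirroring: the primary's response is returned to
# the user; the mirrored copy goes to the canary and its reply is
# discarded, so the user never sees the new version's output.

def mirror(request, primary, canary):
    """Serve from primary; fire-and-forget the same request at canary."""
    response = primary(request)
    try:
        canary(request)          # response intentionally ignored
    except Exception:
        pass                     # canary failures never reach the user
    return response
```

Note the caveat from the talk: if `canary` writes to a database, that write happens twice, which is why mirroring suits read-only workloads.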
D
So this works great for things like a machine learning model that you want to test, or something like batch processing, and stuff like that. And finally, blue/green: the classical blue/green with a traffic switch, where you spin up version two, you run the integration tests, you run a load test on it, and you determine that that version is okay. Then you switch the whole traffic at once from v1 to v2, and this works great for, I don't know, stateful applications and legacy apps.
D
How the canary deployment strategy works: you have, let's say, a deployment running in your cluster at version one. You apply a change to that deployment, let's say you change the image tag to version two, and what Flagger does, instead of letting Kubernetes just do the rolling update of that deployment...
D
...is it spins up version two as a different deployment and slowly starts to route traffic towards it. While it does that, it also measures latency, error rate and other things to determine if the new version is okay and respects your KPIs. If that happens, then in the final step it does a Kubernetes rolling update on the old version; once that is finished, it waits for all the traffic to go back to that deployment.
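The analysis loop just described, step the traffic weight up, check metrics, roll back on failure, can be sketched as below. The step size, thresholds and metric source are invented for illustration; in Flagger these come from the Canary custom resource.

```python
# Sketch of a canary analysis loop. Values are illustrative defaults,
# not Flagger's; real analysis also paces iterations over time.

def run_canary(get_metrics, step=10, max_weight=50,
               max_error_rate=0.01, max_latency_ms=500):
    """Shift traffic in steps; roll back on a failed metric check.

    get_metrics(weight) -> (error_rate, latency_ms) observed on the
    canary while it receives `weight` percent of traffic.
    Returns 'promoted' or 'rolled back'.
    """
    weight = 0
    while weight < max_weight:
        weight += step
        error_rate, latency_ms = get_metrics(weight)
        if error_rate > max_error_rate or latency_ms > max_latency_ms:
            return "rolled back"   # send all traffic back to primary
    return "promoted"              # rolling-update the primary to v2
```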
D
Then it scales the canary release to zero. A/B testing works the same, except that instead of using a traffic weight, we use a user segment: only those users, let's say those that have an insider cookie or header, are redirected to version 2, and based on that traffic...
D
...the decision is taken to promote version 2 or roll it back. Blue/green is the same, but without any production traffic: here you just run your conformance tests and load tests, and if everything goes okay, you do the switch, and then your users will be interacting with the new version.
D
So, based on this definition, Flagger generates a bunch of objects. If you wanted to do a canary setup manually, you'd have to duplicate all your definitions: you'd have to have, let's say, two deployment definitions, two horizontal pod autoscalers, two ClusterIP services and so on. Then you'd have to add service mesh objects or ingress objects for each version.
D
If you are using Flagger and the canary definition, then you specify only your deployment and your horizontal pod autoscaler, and Flagger will generate all these objects for you: the Kubernetes ClusterIP services, service mesh objects if you are using a service mesh, or ingress objects if you are using an ingress controller.
D
So this is how Flagger simplifies a lot of the setup of a canary release, and it also allows you to move from one service mesh implementation to another, or from one ingress controller to another, without having to change anything inside your deployments.
D
If you are not using a service mesh, and you only want to do canary releases for your apps that are exposed outside the cluster, then you can use Flagger with an ingress controller like Contour or NGINX, and two weeks ago Skipper also got integrated into Flagger.
D
How the validation process works: Flagger comes with two built-in metrics based on Prometheus. If you install Istio or Linkerd or any kind of service mesh or ingress controller, these proxies will expose two metrics. One is the request success rate: out of all your requests, what's the percentage of errors, let's say 500 errors. The other is request latency: you can determine, let's say, what the average request duration for your users was in the last minute. Based on these two metrics you can set up KPIs.
D
You can also define custom metrics, and we'll see, I have an example further on. You can also specify webhooks that will call into your integration testing platform, run load testing and so on during the analysis. And of course, Flagger looks at the Kubernetes objects and queries the deployment health status.
D
So, for metrics, you can define custom metrics. Besides these two, the request success rate and request latency, if you want to extend the metrics to something else (let's say your application exposes some custom Prometheus metrics, or you want to use other things that your service mesh exposes), you can create an object called a metric template. There you can define a Prometheus query, or a Datadog query, or a CloudWatch one, and Flagger will call into all these services.
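A custom-metric check of the kind just described boils down to running a query and comparing the result against a KPI threshold. A minimal sketch, assuming a PromQL-style success-rate query; the query text and threshold are illustrative, not Flagger's metric-template schema.

```python
# Sketch of a custom metric check. QUERY is a PromQL-style success-rate
# expression written for illustration; in Flagger it would live in a
# MetricTemplate CRD and run against Prometheus, Datadog, etc.

QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[1m])) / '
    'sum(rate(http_requests_total[1m])) * 100'
)

def check_kpi(run_query, query=QUERY, min_success_rate=99.0) -> bool:
    """Return True when the queried success rate meets the KPI.

    run_query(query) -> float, the scalar result of the query.
    """
    return run_query(query) >= min_success_rate
```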
D
In terms of alerting, Flagger implements four alert providers: Slack, Microsoft Teams, Discord and Rocket.Chat, and you can configure more than one provider for a canary release. Let's say your SRE team is using Slack on a particular channel, and your dev team maybe is using Discord or something else. With Flagger you can funnel all these events to the right team, no matter what provider they use.
D
You can configure Flagger to call into your services at different stages during the canary release. For example, you can run helm test before you expose the new version to live traffic: Flagger will call into a helm-test service, which will run your Helm tests, and if those are successful, then it goes to the next stage of the canary release, called rollout. During that stage, for example, you can start a load test. Why is a load test needed? Because, let's say, you can deploy at any point in time...
D
...but maybe you don't have live traffic, so a load test is there to generate traffic, so Flagger can see metrics and decide what to do. For load testing there are three implementations: hey is a load test for HTTP.
D
There is also one for gRPC, and for conformance testing there is support for Helm tests and bash tests, but you can also implement your own test runner and tell Flagger to call into that.
D
In terms of manual gating, there are several gates that you can set during the analysis. For example, let's say you push a change to your production system, but you don't want that change to be automatically deployed or tested. There is a confirm-rollout webhook, and Flagger will ask you: hey, I've detected a change...
D
...do you want me to start the analysis or not? After the analysis is over, it can also ask you: hey, do you want to do the final stage, the promotion? And at any point in time during the analysis you can tell Flagger to roll back, even if there are no errors. All these things happen through webhooks.
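The gating flow above can be sketched as a walk through release phases that halts at the first closed gate. The phase names mirror the talk (confirm-rollout, confirm-promotion), but the gate protocol here is deliberately simplified; in Flagger the gate is an HTTP webhook whose status code controls whether the phase proceeds.

```python
# Sketch of webhook-style manual gates: before each phase the
# controller asks a gate whether to proceed; a closed gate pauses
# the release there until a human approves.

def advance(phases, gate_open):
    """Walk the release phases, stopping at the first closed gate.

    gate_open(phase) -> bool, e.g. an HTTP call that returns 200
    for "approved". Returns the list of phases that actually ran.
    """
    ran = []
    for phase in phases:
        if not gate_open(phase):
            break            # wait here until approved; retry later
        ran.append(phase)
    return ran
```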
D
In terms of integration, Flagger, being a CRD controller, works great in a GitOps pipeline. Let's say you change something in your deployment spec and commit that to git; then a GitOps operator like Flux, Argo CD or Jenkins X will reconcile that object on the cluster, Flagger detects the new version, and it starts the canary analysis for you.
D
We are looking at adding more metric providers. For example, maybe you want to look at Stackdriver or InfluxDB for custom metrics. I'm pretty happy about the Kubernetes Ingress v2 API: it has all the things Flagger needs to implement all the strategies. So at some point I'm guessing the ingress controllers will be switching from v1 to v2, and I want Flagger to be ready for that switch.
B
Yeah, I would say quite a comprehensive, very wonderful presentation. I am very interested in the roadmap of Flagger, because there are several things I want to ask. I know you have already integrated with SMI. Does that mean I can now use, for example, Microsoft Open Service Mesh directly with Flagger, or do we still need some work to make that happen?
D
B
So that means the first thing we need to do, if we want to support OSM, is make Flagger support the latest version of SMI, right? Yeah, okay. There's also another question in the chat box: it's about Flagger compared to Argo Rollouts. Can you explain a little bit about that?
D
Yeah, Argo Rollouts is very different from what Flagger does. Flagger works at the deployment level; Argo Rollouts works at the ReplicaSet level, so it's an implementation over the ReplicaSet.
D
One of the reasons we made Flagger reference a deployment, the same way a horizontal pod autoscaler references a deployment, rather than embedding the whole deployment spec inside of it, is that from the beginning we wanted Flagger to be able to take over applications while they're running in production. Forcing users to do that by changing all their deployments and all their Helm charts, charts that maybe they don't even control, wasn't an option for us. I think that's one of the main differences.
B
Okay, I think that is what we have for Flagger, folks. Again, thank you very much for joining the meeting and doing the presentation; we're very happy to see what's next for Flagger, and very excited about this roadmap, especially what you mentioned about Ingress v2.
B
I personally really think it's a big change for the community, because right now we have so many ingress controllers, and that makes integration with this kind of tool very hard. So I'm also trying to look at what direction Ingress v2 is going, and we're happy that Flagger has it on its roadmap.
B
Okay, so we're pretty much done with the first project presentation, and next we will have the LitmusChaos folks do an update on their project, especially now that it has been donated to CNCF as a sandbox project: what's next, the most current status, and any new features to be aware of. Okay, so please take over from here, Uma.
A
Right, yeah, thank you, Harry. Let me first figure out the annotation settings and disable attendee annotation.
A
Perfect, so Zoom-wise we are safe now. Okay, that was a great presentation from Stefan; I'm going to look it up, especially since Litmus really operates in that space. So what I thought is to give a quick presentation on what we achieved in the last three months, and I'll probably show a few actual updates.
A
I don't have detailed slides; this is just a 10 to 15 minute update. First of all, we started using the sandbox logo, so that's great. I think the community is really growing after we got into the sandbox; there are more people trying it out, so that's definitely great news, with a lot of value being added to the project itself. So thank you...
A
...to the SIG App Delivery leads, who did a detailed due diligence. Our mission statement continues to be finding weaknesses in the implementation of either the Kubernetes platform or the applications that are deployed on it; resilience and reliability are of foremost importance for both these personas. Our medium to longer term mission is also to help make chaos very easy for developers to use, as an extension to the development process or within CI pipelines, but right now we're really trying to concentrate our short-term roadmap on helping SREs...
A
...do chaos end to end and get some validation of their existing operations or setups. On contributions: MayaData continues to be the prime sponsor, but the great news is that there is a lot of community embracement happening.
A
Intuit has uploaded their work on AWS with Litmus, and Amazon itself is pitching in: one of the contributors from Amazon is helping with docs, and RingCentral is helping with Helm chart management. So we actually had a problem of, you know, how do we deal with so many various types of contributors?
A
We took some inspiration from SIGs, but I'll talk about that later. One of the primary assets of the project is the hub; we continue to get appreciation for having all these experiments in a central place. In the last three months we really grew the project usage by almost a hundred percent: we were at about fifty thousand experiment runs when we submitted the project to the sandbox as an application.
A
Within a few months the usage has doubled, and we of course added more experiments from the community. I just put up one slide, which I will use as a guiding slide, and then go to a couple of blogs and other stuff to show.
A
I am not talking much about what our own team did, but rather what the community did and how the project is actually growing inside the community; those are the related updates. First of all, we now have a dedicated community manager to help with community questions, to run better Slack communication and so on, and to be available to run more meetups and present Litmus at other meetups. He's here as well, so welcome to the project. In between, we added four projects.
A
Okteto is a namespace provider for developers, a Kubernetes environment at the namespace level. They have taken Litmus and provided it as an option to introduce chaos, and that actually opened up a very good new persona. We always needed CRDs to be installed, so we needed admin privileges to run Litmus, but this use case relaxed that strong requirement and gave us an opportunity to rethink the personas. So I'll probably talk about that.
A
So that's a great update. We also have a new website, which of course follows the guidelines, and along with the new website we also got a contribution from an open source contributor: a new mascot for chaos. So that was a surprise.
A
A minor thing, but it was pretty awesome that this came from the community. We also updated the Chaos Hub with some more easy-to-use features, apart from easy search. One thing the community wanted was: your experiments are great, but I need to add to or tweak the experiments, and for that I need to download them. Is there a way your hub can provide an integrated editor?
A
So all these experiments can now be updated in a local cache in the browser: you can just update them, tune them and then apply them.
A
You know, this was another thing requested by the community, and then we had it. We also, very organically, got engineers from the Azure test team and then someone from the Rancher community; they tested all the Litmus experiments on their platforms and sent PRs saying: hey, can you add these as certified platforms? So that was a great addition.
A
And then Container Solutions did a good study on open source chaos engineering tools, and this was good information for me as well regarding the other projects. Both the CNCF projects are covered, Litmus and Chaos Mesh; they both come out with almost the same feature scoring, but it's great to see that chaos engineering is being covered as part of the CNCF encouragement.
A
Both projects are scoring pretty high, so that was one good thing for the community. The other thing: we run monthly community meetings, and we wanted to mentor some teams who are coming in in different areas, so we took inspiration from CNCF's SIGs for Kubernetes; I think some other projects are also doing something similar.
A
So we have created the concept of SIGs within Litmus, and we try to have SIG meetings once a month or on an on-demand basis. Recently the docs SIG as well as the deployment SIG have had some meetings. How we do that is: we did not create any new projects, but we made use of...
A
...the teams in the LitmusChaos organization. If you go to the teams in the organization, we basically defined our teams, and whoever wants to come can participate and be part of that. This is a pretty simple way of telling the community that there are multiple groups: you can choose observability, or deployments, or integrations, or documentation, wherever you are interested. It looks like chaos is applicable to almost all areas across the CNCF landscape, so there's a good amount of interest coming in.
A
So this is one way we're able to segregate the broad interest into smaller groups so that more contributors can come and drive things themselves. The idea is that somebody will establish themselves as a major contributor or driver within that group, and then they will add to the product roadmap prioritization list for the entire project. So that's another thing we started, and it has been received pretty well by some parts of the community. And the other one: we are getting a lot of new queries.
A
Air-gap support was requested from the community, I think by Shantanu, who's here, so we added some additional help there. ARM support was another top request; we added that as well.
A
In terms of the roadmap, our team has been very busy developing the Litmus Portal, which I'll show. It's not yet released, but an alpha is going to come out later this month. The idea of the Litmus Portal is that it's not just about experiments on Kubernetes resources or applications: an SRE should be able to create or orchestrate complex chaos workflows in order to find the deep-level weaknesses in their operational deployments.
A
So a good portal is needed, and easy-to-use monitoring is very, very important: what's happening to my chaos workflow? A lot of work is going on there, and we have embedded the entire Argo workflow engine into the project.
A
What I mean is: an Argo workflow wraps all these experiments, and then we call that a chaos workflow, but it all gets done in a declarative way, with UI support on the front end as well. And we've been getting a lot of inbound interest along the lines of: we need Grafana dashboards. Chaos is great, it's very easy to use, but how do I monitor it? And there are a lot of existing experiments.
A
We got more updates as well as more queries, and our intention is to help in terms of: can we have more granular tunables within the Chaos Hub? So that's roughly the update, and these are some of the dashboards that are in the making; they'll go out in this release. We have a monthly release cadence. The whole idea is Prometheus metrics and Grafana.
A
There's a wealth of information, and when something happens you can easily know about it. But with Litmus you are willfully introducing these faults, so you need to know: what was that problem? Did it happen during chaos injection, or did it occur naturally? We needed to give that perspective of a willful, simulated fault versus an organic one. So this is an example.
A
If you are running CPU chaos on a node, you can see that this chaos was introduced (a node chaos, most likely) and you can see the node utilization increase. It's also useful: once the chaos was introduced, there were some issues with that node, the node utilization was not coming down, so you could say that we found an issue there. A lot of things can be interpreted as a weakness or, you know, as a strength. So this is an example with the well-known Sock Shop microservices demo application from Weaveworks.
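The distinction drawn above, a fault seen during a chaos-injection window versus one that occurred organically, is what the dashboards overlay visually. A minimal sketch of that labeling, with plain numeric timestamps as an assumption for brevity:

```python
# Sketch of distinguishing injected from organic faults by overlaying
# chaos-injection windows on an anomaly timeline, as the Grafana
# dashboards described above do visually. Timestamps are plain numbers
# here for illustration.

def label_anomalies(anomalies, chaos_windows):
    """Tag each anomaly timestamp as 'injected' or 'organic'.

    chaos_windows is a list of (start, end) pairs during which a
    chaos experiment was running.
    """
    labels = {}
    for t in anomalies:
        in_chaos = any(start <= t <= end for start, end in chaos_windows)
        labels[t] = "injected" if in_chaos else "organic"
    return labels
```

An "organic" label here is the interesting case: the anomaly was not caused by the experiment, so it may point at a real weakness.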
A
So we took that and added a bit of chaos metrics and chaos graphs to the application graphs, and this is actually being contributed, or in the process of being contributed, by the community, and they are also coming in and doing a lot of...
A
...development in the SIG observability area, so that's really heartening to see. The other thing, which I think we just started: we probably need to work with the Keptn team and others. We did not find a lot of bandwidth, but we did research Keptn; the Keptn team approached us, and there were also some other community members asking: you're able to do chaos, but how do I interpret the results?
A
So that's the idea: we're trying to use not all the features of Keptn, but just the quality gates feature; it's a very nice feature. End users can use both Keptn and Litmus together, just like Litmus and Argo workflows: Argo workflows are used to create chaos workflows from chaos experiments, and Keptn can be used to integrate quality gates as well. So we see that a lot of CNCF projects are coming together, becoming more like solutions. We are also trying to focus on...
A
...being a better CNCF project, so that people not only adopt it, but there is more community engagement as well.
A
The other thing that we are dealing with right now is multi-tenancy. It looks like Kubernetes has introduced good features with respect to multi-tenancy, and we also did a small survey, our own research, among various forums. Somebody here can give more feedback, but at least what we got to know is that many team members sharing a single Kubernetes cluster is not an uncommon scenario. What that really means is that they would expect whatever Kubernetes environment they need for development without needing...
A
...privileges, admin privileges. So we have now introduced an admin mode and a namespace mode, and there is something called standard mode also. The whole idea is that we are working towards making Litmus usable by end developers or SREs who may not have admin privileges, so that it's pretty easy to use, and there are some thoughts about developing a ConfigMap-based approach rather than CRDs.
A
But the easy option is: let at least the admin install the CRDs, and later developers or other people can come and use Litmus within their namespaces. So that's another thing; there's still a lot of discussion going on in the community, but I thought that's an interesting update to share. Overall, I think we are progressing very well, getting a lot of support from different communities within CNCF.
B
Well, thank you very much. I'm very happy to see the progress that LitmusChaos has made after joining the CNCF, and we're happy to see the positive feedback from the community and the growth in the usage of this project. We don't have a lot of time for questions now, but I'd like to check if anyone has any questions, so we can let Uma...
A
All right, I will post the slides in the meeting notes.
B
Yes, thank you. Yeah, please link the slides in the community meeting notes, so other folks who cannot join the meeting can still check what's going on and what you presented. So, we really appreciate the presentations from Uma and Stefan. We will try to engage more people to present their awesome projects, regardless of whether they are part of the CNCF or not, or whether they have any motivation to donate them or not.
B
We are trying to make this a more collaborative and open community for people to talk to each other and share knowledge about application delivery and application development. So I hope that we will have more projects presenting in the upcoming meetings, and if you have any idea or any project you want to see, I will be very happy to talk with you, and I will try to reach out to the projects as much as I can to have...
B
You
know
these
these
awesome
ideas
present
in
the
meeting
okay,
so
I
think
we
are
pretty
much
our
a
little
appreciation
for
today's
meeting
and
again
we
really
thank
you
for
everybody
joining
this
meeting
and
I
will
be
super
looking
forward
to
see
you
focusing
you
next
time
so
bye-bye.