A: Greetings everyone, my name is Prithviraj, and I welcome you all to the CNCF on-demand webinar, where we will be talking about the LitmusChaos year in review 2022 and the chaos engineering updates. It's been an amazing year for the LitmusChaos project: from incubation to so much more, what the project has achieved as a community together over this year has been commendable, and we are here to share the year in review with you all. I have with me Vedant; we'll be introducing ourselves quickly.
A: So moving on to the introduction: as you all know, I'm Prithviraj, and I'm leading the community for LitmusChaos. We started off at MayaData, and since then it's been a journey of more than two and a half years in which we have switched companies. Finally, we are at Harness, which is a primary sponsor of LitmusChaos, and we are contributing to the Litmus project year in and year out. Alongside that, I've been involved in the Kubernetes community.
B: Hi everyone, this is Vedant. I'm also a core contributor at LitmusChaos and a senior software engineer at Harness. It's been the same journey as Prithvi's: we started at MayaData and now we are here at Harness. I also work as part of the resilience automation team at LitmusChaos, and I'm looking forward to this demo.
A: Awesome. So moving ahead, let's move on to the agenda we have for today. It's a packed agenda; we'll be talking about a lot of things. It's a year in review, so we have to cover a few important items. We'll obviously start off with chaos engineering: there are a lot of new folks who tune in who want to understand what chaos is and don't have much of an idea yet. Then we'll introduce the Litmus project.
A: We'll talk about the Litmus journey, from incubation, which happened earlier this year, to the metrics over the year. We are going to talk about the website, the adopters, the community events and programs we took part in, and what happened during the course of this year; then the student programs; and lastly, KubeCon and Litmus's participation at the KubeCons. Vedant will then take it ahead with the project updates and what lies ahead for LitmusChaos.
A: And lastly, you will get an idea of how you can be a part of the community, how you can contribute, and how you can join this amazing, massive chaos engineering community. So let's get started without any further ado: chaos engineering, a closer look. Before I start talking about chaos engineering, I think it's best to talk about how this practice came in, and why this practice is pretty essential.
A: Failures at a production level, in Kubernetes terms you could say, are what brought in the term chaos engineering. Initially, people saw scaling as an issue and saw production-level failures, and curating those failures was seen as something vital; that is why the term chaos testing came into place. But slowly it was realized that chaos engineering is not just about production-level failures.
A: Obviously you want to test in production, but eventually it became more about bringing it in as a testing practice, where you understand what sort of chaotic conditions might happen in real life, and what might happen to your system when it goes into production, but also earlier than that: in your staging, your pre-staging, your testing, your DevOps CI pipelines, all these environments. So let's take a closer look at chaos.
A: It's basically this: you identify the weak points, or the weaknesses, in a system by testing in a controlled way, where random or unpredicted behavior can be analyzed, visualized, and understood. Then, when the system goes into production and there's a requirement for scaling (for example, on your Black Friday sales there's a spike in the number of users) and your system might show unpredictable behavior, what helps there is chaos engineering. That's the goal of chaos engineering.
A: It was once seen as just breaking things in production, but it's more like breaking your systems in order to identify weaknesses and make your systems resilient, if you want to complete the sentence there. That's chaos engineering in a few words; there's so much more you could say about it, but let's move on quickly to why chaos engineering is the solution, how you can start your chaos engineering journey, and what the four principles, the four or five easy steps, are for...
A: ...running a chaos engineering solution. Chaos engineering came to be seen as important because, as I mentioned, your systems are vulnerable as we move into the microservices era and away from legacy systems. If we take the example of a Kubernetes application, you can say that the application stack itself is like a pyramid: there's the Kubernetes application on the top, then there are your other services, and there's a platform layer.
A: There are your other applications, your MongoDB, Kafka, and even CNCF applications like CoreDNS or OpenEBS (the initial application we started using chaos for), running alongside your application. Each and every layer has a potential vulnerability, or is dynamic in nature, and we keep progressing.
A: There are new enhancements, new developments, and there's a potential outage that can happen; that is why chaos testing helps in continuous validation: it continuously tests whether your systems are resilient or not, even when there's an enhancement in your system. To move on to the solution, or to make sure you are running your chaos tests in the right way, there are four easy steps. First, you identify the steady state of the system: how does it behave, what's its normal behavior?
A: How does it behave when it's in a normal condition? Then you hypothesize around it: you create a hypothesis about how your system behaves in its steady state versus how it behaves when a certain set of experiments is run, or when certain vulnerabilities are surfaced in that experimental group. Then you introduce a fault, or introduce variables that happen in real life, like a server crash, a traffic spike, a network connection error, or a malfunction.
A: Some sort of vulnerability is introduced into your system, which can be called a chaos experiment or a chaos scenario. Then, once you test your systems, you try to disprove the hypothesis you had created, by looking for a difference between how your system behaves in its steady state and how it behaves while the experiment happens, and you continue this loop. If you are an SRE...
A: ...you understand that you have a certain set of service level objectives, SLOs, and if your SLOs continue to be met in spite of running the scenario, then obviously your systems are resilient. But in case your SLOs are not met, you need to find a potential fix for that vulnerability and then continue to test. This can be called the chaos engineering loop, the overall process as defined in the principles of chaos engineering. So, moving ahead.
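The four-step loop described above (observe the steady state, hypothesize, inject a fault, try to disprove the hypothesis against your SLOs) can be sketched in a few lines of Python. This is an illustrative toy, not LitmusChaos code; the function names and the latency-based SLO are invented for the example:

```python
def measure_steady_state(system):
    """Observe the system's normal behavior, e.g. its p99 latency in ms."""
    return system["baseline_latency_ms"]

def slo_met(latency_ms, slo_ms=500):
    """The hypothesis: latency stays within the SLO even under chaos."""
    return latency_ms <= slo_ms

def inject_fault(system, added_latency_ms):
    """Introduce a real-world variable, e.g. extra network latency."""
    return system["baseline_latency_ms"] + added_latency_ms

def chaos_experiment(system, added_latency_ms, slo_ms=500):
    baseline = measure_steady_state(system)
    assert slo_met(baseline, slo_ms), "system must meet the SLO in steady state"
    observed = inject_fault(system, added_latency_ms)
    # Try to disprove the hypothesis: did the SLO survive the fault?
    return "resilient" if slo_met(observed, slo_ms) else "needs a fix"

system = {"baseline_latency_ms": 120}
print(chaos_experiment(system, added_latency_ms=100))  # resilient
print(chaos_experiment(system, added_latency_ms=900))  # needs a fix
```

In practice a tool like Litmus automates the fault injection and the steady-state checks (via probes), and the loop is repeated continuously as the system evolves.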
A: We have quickly introduced chaos engineering, and now we move on to the LitmusChaos project, a project for everyone who wants to practice chaos engineering. It's open source; it's a tool to identify weaknesses and potential outages in your infrastructure. Initially it started off as a tool to test only Kubernetes infrastructure, with only Kubernetes chaos experiments, and slowly, with the understanding of the community...
A: ...we've moved on to creating more and more experiments, which are obviously cloud native but also go beyond Kubernetes: there are experiments for VMs, GCP, AWS. So it's a complete tool set, and it's CNCF native once again: it became a Sandbox project back in 2020, and in 2022 it became a CNCF incubating project. It's an amazing community where people come in, contribute, and make chaos engineering more helpful and more available to everyone out there.
A: So let's talk about CNCF incubation first, before we move on to more details of the project. On January 11, 2022, Litmus became a CNCF incubating project, after more than a year of hard work along with the community. We've got some amazing adopters who came in and talked about how they have been using Litmus, end users like Orange, Kitopi, Lenskart, and so many more, and now the community keeps growing.
A: We are hopeful that we can achieve more, grow more as a project, gain more adopters, and look forward to the graduation stage in the upcoming months or in a few years' time. So, moving ahead, just a few stats: Litmus, as you can see, is adopted by some amazing enterprises. These are some of the formal adopters who have listed themselves.
A: But then again, there are so many more adopters out there using Litmus day in and day out, and so many stories, with AWS FIS and many others; Accenture came out and spoke about how they are using Litmus. A lot of stories keep coming in, and it's been five years of active development; to date, more than a million experiments have been run.
A: We recently saw some 5.4 million Docker pulls, so the project has seen massive growth; you can see chaos engineering growing exponentially.
A: With the release of 2.0 back in 2021, I think Litmus achieved a stable platform, and looking at what's next for our future, with more and more enterprises coming in, Litmus as an open source platform is obviously growing. These are some metrics: this year we grew by 1,100+ stars, we gained 100+ forks, and Slack membership also grew by 400+, with around 500 folks joining the Slack community. And Docker pulls...
A: ...that was a massive surprise for all of us: we saw Docker pulls grow by more than 2.5 million. We've got a massive, amazing community. Obviously, we believe the metrics don't fully capture the love Litmus has received, or the number of people using Litmus, but feel free to check out the LitmusChaos GitHub and drop us a star; make sure you use it and join the Slack community, so that you can also be a part of this massive community.
A: We have spoken about a lot, so you can just go through the Litmus website, which gives you a lot of information on what Litmus is, how you can be a part of the community, what the docs are, what the enterprise version holds, and all these things. So let's move back to our slide deck.
A: We were speaking about the website, and now let's talk about the adopters: the end users, the formal adopters. We saw eight amazing end-user adopters this year, and then many more people coming out and sharing their Litmus stories. I'll share a few of those stories with you: iFood, FIS, and Adidas.
A: The other stories are available on GitHub. So, the Adidas story: they started a few months ago, and it was about bringing in a culture of chaos engineering as a practice. After evaluating various tools, they chose LitmusChaos for the following reasons. They're using LitmusChaos for their applications, workloads, and infra, and they are using experiments like pod deletion, network latency, and packet loss for their payment section.
A: They also test the login section, and they haven't moved to production yet, as shared by Victor, who is one of the community members. But hopefully this story will evolve, with Adidas moving into production and speaking more about their Litmus usage. We are glad to see why they chose Litmus: their priorities matched.
A: These are the priorities they shared. As of now, Litmus is used in a staging/pre-production environment, and the future plan is to move into production through CI/CD pipelines. So this was an amazing story shared by Adidas, I think one of the best stories that can come forward for Litmus usage. And maybe, if you are using Litmus, you can also come forward and share your story: how exactly you're using Litmus...
A: ...why you chose Litmus, and how you plan to use Litmus in the future. Similarly, Raj, another amazing community member, shared how Litmus is used at FIS Global.
A: They have been moving towards more SRE practices and transforming platforms, and that is why they chose LitmusChaos: it fulfilled a lot of things for them. I'm glad that it fulfilled their testing requirements and that they found a great community (thanks again for the good words about the community); all these factors helped them adopt Litmus. They are using Litmus on their applications and workloads, simulating experiments to understand the utilization of their JVMs' key resources.
A: They are also using Litmus for Kafka resiliency, and eventually they're looking to integrate Litmus with CI/CD. So these have been some amazing stories, and one last story, which also became a case study for us and came out as a blog, is the one by iFood. iFood is a food delivery platform based out of Latin America (Brazil and Colombia), handling approximately 60 million orders per day.
A: They had introduced a fallback, or circuit-breaker, method, and most of the engineering teams tried to provide support during outages, but the eventual realization was that they needed a better approach. That better approach was chaos engineering; that's what they shared with the community, and that is where they decided to check out various tools. Here's an architectural view of how they are using LitmusChaos.
A: They saw the broad set of experiments, they saw how Litmus could help them, and they found that it has a well-defined RBAC and authentication mechanism; that's what led to iFood using LitmusChaos. These stories have been amazing; they came out this year, sharing how people can use LitmusChaos. It's become a business case, a use case, for everyone considering adoption. Feel free to check out the blog, the iFood story, on how they plan to use LitmusChaos.
A: Moving on, these are the community events and programs that happened this year. There were so many; I just took out the pictures and put them up for the community to see, and there are so many more. Shout out to so many folks out there: Amit, Saranya Jena, Vedant, Sayan Mondal, Karthik, and many more.
A: Karthik has also been leading the Litmus community for some time now. Shout out to these folks, who have been contributing to the community and have taken part in amazing community events like KCD Sri Lanka, KCD Bangalore, KCD Chennai, All Day DevOps, the Docker meetups, and the AWS Community Days. So much has happened over the year, and we thank all of them for taking part in the community, contributing, and joining these community events and programs to make the community a really successful one.
A: Moving on, let's talk about a couple of things we regularly organize: the community sync-up calls. Our monthly cadence is a release on the 15th of every month, followed up with patch releases and fixes, and we hold the community sync-up calls every third Wednesday of the month. If you haven't joined one yet, feel free to: join the Slack channel and the community sync-ups. And then we have the chaos engineering meetups.
A: We had an in-person meetup late last month, and we hold an online one every last Thursday of the month. If you are available, feel free to join in, feel free to submit a talk, and let us know if you're interested in speaking; this is what the community has been doing with meetups every month. And then Chaos Carnival, of course: the 2023 edition is coming up, and Litmus has been a proud community sponsor.
A: So if you want to speak about something related to Litmus and join Chaos Carnival as a community member, feel free to reach out to us, join the community, and submit your talk. Lastly, student programs: LitmusChaos has taken a crucial part in student programs such as GSoC, GitHub Externship, and the LFX Mentorship. Prayag became one of our mentees early this year, and we had a great time with him.
A: He helped add new CLI commands for scenario CRUD operations, and he also enabled users to automate scenarios as part of a CI/CD pipeline. Basically, his work was around developing new features and adding integration tests for litmusctl. Again, kudos to Prayag for being an amazing part of the community and helping as an LFX mentee.
A: Lastly, let's talk about the KubeCons. Both KubeCons this year were amazing, with massive participation from the community at KubeCon EU and KubeCon NA, and we had two amazing project meetings. We had a case study...
A: We had Uma and Ramiro from the operator community talking about the case study of bringing chaos engineering to cloud native developers, and then both maintainer tracks, featuring an end-user story from Civo. We had Uma and Karthik sharing the project updates and how the project has grown over the last one and a half years, and two amazing stories were shared: Raj from FIS on how chaos engineering is applied in the fintech domain, and the Iter8 community on how fault injection and SLO validation go hand in hand. There was also an amazing co-located event, Chaos Day, with good participation from a lot of folks in the community.
A: They spoke about Litmus; shout out to Bianca from HCL and Crystal Lam, other amazing community members who spoke about how they are using Litmus and how Litmus has been essential to them. We look forward to the KubeCons in 2023, in Amsterdam and Chicago: the plan is to have LitmusChaos there, with a booth if possible, plus the maintainer track sessions, speaking more about Litmus. Being at these amazing KubeCons has been a pleasure.
A: We thank CNCF again for giving Litmus the platform, and we look forward to participating in more KubeCons and making Litmus reach the community further. These are some snapshots. Lastly, thanks a lot to Chris, Priyanka, Dims, Kiran, and Sumit, other amazing community members; we have friends everywhere.
A: So, thank you so much to all the community members who have participated with Litmus, who have given Litmus a platform, who have loved Litmus so much and helped chaos engineering adoption. We hope you keep enjoying what Litmus is and how Litmus is growing, and we hope to see you again at one of the KubeCons, at some conference, or somewhere else where you continue to support Litmus. With this, I'll let Vedant take over; he will be talking about the project updates, the next version, and how you can get involved. So, without any further ado, Vedant, you can take over.
B: Thanks, Prithvi, for sharing such great details. It's always good to know how chaos engineering is doing and how our project is doing, and it's obviously been a great year: we have had a great number of contributions, a good number of feature requests, and contributions in terms of feature enhancements as well. The community helps us build the product, and it's been a year of learning...
B: ...while building these feature enhancements. So I will proceed with the enhancements we did this year; I'm just sharing my screen.
B: Okay, I hope my screen is visible. So, the main highlight of this year: we introduced the HTTP chaos experiments. We started with pod HTTP latency, but now we have five different experiments. These experiments were introduced for Kubernetes-based platforms, so now what you can do is target a particular pod at a particular port.
B: By specifying the port, you target the traffic going through that port: injecting latency via the HTTP latency experiment, modifying the status code of the traffic, similarly injecting connection reset events via reset peer, modifying the headers of the traffic, and modifying the body of the traffic.
B: If you want to know more about these experiments, they are available in our documentation catalog. If you go into the experiment documentation, under the pod chaos category you will find five different experiments based on pod HTTP chaos. Latency is what we started with, but now we have these five different experiments here.
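Conceptually, the pod HTTP experiments behave like a proxy wrapped around your service that delays or rewrites responses. A toy sketch in Python (illustrative only; the real experiments intercept traffic at the pod's port, and `http_chaos`, `handler`, and the tuple-based handler signature here are invented for the example):

```python
import time

def http_chaos(handler, latency_s=0.0, status_code=None):
    """Wrap an HTTP handler the way an HTTP-chaos proxy would:
    delay the response and/or overwrite its status code."""
    def wrapped(request):
        time.sleep(latency_s)            # like pod-http-latency
        status, body = handler(request)
        if status_code is not None:      # like pod-http-status-code
            status = status_code
        return status, body
    return wrapped

def handler(request):
    # A stand-in for the real service: always succeeds.
    return 200, "ok"

faulty = http_chaos(handler, latency_s=0.01, status_code=503)
print(faulty("GET /"))  # (503, 'ok')
```

Modifying headers or the body (the other two experiments mentioned above) would be analogous transformations applied inside `wrapped`.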
B: That's it for HTTP chaos. Next is the addition of the AWS AZ experiment; this is also a great feature enhancement. This experiment helps users detach availability zones from a particular load balancer. It was added in litmus-python, and it's actually a good example of how you can write your own experiments...
B: ...be it in litmus-go or in litmus-python; in case you're not very familiar with Go, you can also use litmus-python to write your own experiments from scratch. Next are the GCP experiments. We already had GCP experiments, GCP instance stop and GCP disk loss, but what we had worked by name.
B: If you go into the GCP category, we already had the GCP VM instance stop experiment, but there you have to give the instance name, or a list of instance names, and then inject the VM instance stop chaos. One issue you might face: your instances might not always have the same names, for example if you are using managed instance groups, where instances go down and come back up; in those cases the instance names might not stay the same.
B: So, instead of providing names, you can now use instance labels. Similarly for disk loss: we were able to detach a disk by giving the disk names, but with the newly introduced experiment, VM disk loss by label, you can provide the label of a particular disk and it will detach that disk from the VM accordingly. This is also available in our documentation.
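The difference between name-based and label-based targeting can be shown with a small sketch (plain Python over hypothetical instance records, not the GCP API; `pick_targets` is an invented helper):

```python
def pick_targets(instances, name=None, label=None):
    """Select VM instances either by exact name (fragile when managed
    instance groups recreate VMs) or by label (stable across recreation)."""
    if label is not None:
        key, value = label.split("=")
        return [i for i in instances if i["labels"].get(key) == value]
    return [i for i in instances if i["name"] == name]

instances = [
    {"name": "web-abc12", "labels": {"role": "web"}},  # random MIG suffixes
    {"name": "web-xyz34", "labels": {"role": "web"}},
    {"name": "db-1",      "labels": {"role": "db"}},
]

# By name you must know the current random suffix; by label you always
# catch the whole group, even after instances are replaced:
print([i["name"] for i in pick_targets(instances, label="role=web")])
```

This is why the by-label variants are the better fit for dynamic environments such as managed instance groups.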
B: You can also find ChaosEngine and ChaosExperiment manifests for these experiments in ChaosHub. So these are the new experiments we added this year; there have also been many good enhancements on the experiment side, or I would say the Litmus core side. Moving ahead, let me start with the enhancements; I will just make it full screen. We already had the pod network latency experiment.
B: Now we have a new ENV, a new tunable, where you can provide jitter. For example, say you give your experiment a delay of 10 seconds: it will inject chaos into all the target pods with a constant latency of 10 seconds. But realistically, latency is not always a constant 10 seconds; you want to be more realistic. In that case you can make use of this jitter ENV. Say you have a 10-second latency and you provide 5 seconds of jitter: the injected latency will then range between 5 and 15 seconds. This helps you simulate more realistic traffic latency behavior and monitor how your applications behave in such cases.
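The jitter behavior described here amounts to uniform sampling around the base delay. A sketch (illustrative only; in Litmus these are ENVs on the experiment, and `sample_latency` is an invented name):

```python
import random

def sample_latency(base_ms, jitter_ms, rng=random.Random(42)):
    """With jitter set, each injected delay varies uniformly in
    [base - jitter, base + jitter] instead of being constant."""
    return rng.uniform(base_ms - jitter_ms, base_ms + jitter_ms)

# Base latency 10s with 5s jitter: every sample falls in [5s, 15s].
samples = [sample_latency(10_000, 5_000) for _ in range(1000)]
assert all(5_000 <= s <= 15_000 for s in samples)
```

With `jitter_ms=0` this degenerates to the old constant-latency behavior.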
B: Next, we have done some enhancements in the stress chaos experiments. By default we have tunables for injecting chaos based on absolute values: for example, you consume memory, CPU, or disk in terms of GBs or millicores. But you don't always want to consume resources in absolute terms; in such cases, you can use the memory-load or CPU-load tunables, where you provide the consumption as a percentage, and the experiment will go ahead and consume that percentage of memory or CPU.
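The percentage-based tunables boil down to deriving the absolute amount from the target's capacity. A minimal sketch (hypothetical helper, not the actual Litmus tunable names):

```python
def memory_to_consume(capacity_mib, absolute_mib=None, percentage=None):
    """Stress-chaos style tunables: consume memory either as an absolute
    amount (MiB) or as a percentage of the target's capacity."""
    if percentage is not None:
        return capacity_mib * percentage / 100
    return absolute_mib

# On an 8 GiB node:
print(memory_to_consume(8192, absolute_mib=500))  # 500
print(memory_to_consume(8192, percentage=50))     # 4096.0
```

The percentage form scales with the target, so the same experiment spec stresses a 4 GiB pod and a 64 GiB node proportionally.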
B: In the same stress chaos experiments, we also added support for cgroup version 2; we already had support for version 1, so the stress experiments now support cgroup v2 as well. Next is what I would call a feature request from the community. There are many use cases: for example, you are targeting applications in your cluster...
B: ...you give app labels and the namespace these applications reside in. But say you only want to target the applications which are on a particular node, and you don't want to touch any application elsewhere. In that case, you can now provide a node-level selector as well.
B: For example, there are three replicas of an application, one replica resides on node A, and you don't want to touch the replicas which are not on node A.
B: You can provide the label of that node under the node-label tunable, and then the experiment will only target those pods residing on that particular node A. That way you are also reducing your blast radius, and you get more granular control over the chaos.
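Restricting targets to one node is a simple extra filter on top of label selection, which is what shrinks the blast radius. A sketch over hypothetical pod records (`filter_targets` is invented for illustration):

```python
def filter_targets(pods, app_label, node=None):
    """Select target pods by app label, optionally restricted to a
    single node to shrink the blast radius."""
    targets = [p for p in pods if p["labels"].get("app") == app_label]
    if node is not None:
        targets = [p for p in targets if p["node"] == node]
    return targets

pods = [
    {"name": "shop-1", "labels": {"app": "shop"}, "node": "node-a"},
    {"name": "shop-2", "labels": {"app": "shop"}, "node": "node-b"},
    {"name": "cart-1", "labels": {"app": "cart"}, "node": "node-a"},
]

# Only the "shop" replica on node-a is touched; shop-2 is spared.
print([p["name"] for p in filter_targets(pods, "shop", node="node-a")])
```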
B: Next is the cmd probe enhancement. This year we made some good additions to the cmd probe. The cmd probe already had support for a source mode: you can provide your own image to run commands in, by deploying a new probe pod. But there we didn't have support for providing ENVs, or, in case your image is private...
B: ...you may want to use an image pull secret to pull that private image, and you may also want to provide different args for that image. Now, in the cmd probe, we have added support for the image pull secret, the image pull policy, and the command, args, and ENV tunables. In case you want to know more, we also have documentation: under Concepts, under Probes, we have the command probe.
B: Here, inline mode means you are not providing any image of your own, so the command runs as part of our experiment pod. But what we are looking at is the source mode. In source mode you provide your own image, which can be private or public, or you might want your own customized commands or arguments.
B: In this case, in the source, we can provide the image, the image pull policy, whether that particular pod will be privileged or not...
B: ...whether that container will be in the host network or not, and similarly ENVs, image pull secrets, and other things. So this was the enhancement done in the cmd probe: it allows you to run the probe pods with your private images, in a more customized way, and to have more control over them. So, moving forward.
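A command probe boils down to running a command and comparing its trimmed output against an expected value with a comparator. A local sketch (the real cmdProbe runs the command inside the experiment pod or a dedicated probe pod; `run_cmd_probe` and its comparator names are invented for illustration):

```python
import subprocess

def run_cmd_probe(command, comparator, expected):
    """Run a shell command and compare its trimmed stdout against the
    expected value, the way a command probe validates a hypothesis."""
    out = subprocess.run(command, shell=True, capture_output=True, text=True)
    actual = out.stdout.strip()
    if comparator == "equal":
        return actual == expected
    if comparator == "contains":
        return expected in actual
    raise ValueError(f"unknown comparator: {comparator}")

# Probe passes when the command's output matches the criteria.
print(run_cmd_probe("echo ready", "equal", "ready"))  # True
```

The source-mode enhancements above are about where and how this command runs (which image, ENVs, pull secrets), not about the comparison logic itself.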
The next one was also a great enhancement; let's just go through it.
B: Say you come to LitmusChaos and you want to write your own experiments. We provide SDK templates: if you go into the litmus-go repository, there is a developer guide the community can follow to bootstrap their own chaos experiments from scratch. Previously, though, we only had templates for creating experiments based on the exec model.
B: Now, as part of the enhancements done this year, you can also bootstrap experiments that follow the helper pod model, or even non-Kubernetes-based experiments, say AWS, GCP, VMware. We have templates for all the different categories of experiments: if you check, there are templates for the AWS and exec models, the helper model, VMware, and GCP. So now you don't have to write everything from scratch.
B: You can use these templates to bootstrap your experiments for any of these categories, and we will continue to add more, so that generating experiments becomes easier for the community too. It always helps to promote our BYOC: bring your own chaos.
B: Next is containerd CRI support for the DNS chaos experiments. Previously, the DNS chaos experiments only supported the Docker container runtime, but now they also support the containerd runtime, so they work in those environments as well. And the next one was done for service mesh enabled environments; this was also a query from the community.
B: In some cases, when you are using the HTTP chaos or network chaos experiments and you provide destination hosts, the way we deduce the target IPs for those destination hosts can differ, because of how traffic flows in service mesh enabled environments. In the network chaos experiments we have the tunable for providing destination hosts, the particular destination hosts...
B
you want to target, but in service-mesh-enabled environments the process is a bit different, so we were not able to find the destination IPs for them. These enhancements allow us to run our HTTP chaos and network chaos experiments in service-mesh-enabled environments as well — they go through a different process for deriving the target IPs of the target hosts. So next is this one.
B
We wanted to do this one: for example, in node and infra related experiments we had the AUT status checks — the target application checks — but those have been removed now. Because these are node and infra experiments, if you are using non-Kubernetes-based experiments or node-based experiments, you might not want to check the target application; you want to monitor your target node. In those cases these checks weren't required, and this was also a query from the community.
B
So this has been removed now, and it was also done for the pod related experiments — it is done for all the experiments, I would say. Whenever you run experiments we have pre-chaos and post-chaos checks, and in those checks what we do is check the status, or the liveness, of the target application or target node. Those checks can now be made optional.
B
There is a tunable app health check parameter in the ChaosEngine. What you can do is set it to true or false, and based on that it will check the app health or node health accordingly. If you set the health check to false — say you are confident in your nodes, and you are already sure that even if you inject chaos the target application will be healthy, or the target node will be healthy, or the target infra will be healthy —
B
you can make it false; you don't want these checks. But in case you want to have them, you can just keep it true. By default this will have its value as true, but you can make it false, so it's optional for us. So yeah, these were some changes that were done on the Litmus core side, and now moving ahead to the Chaos Center side.
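A sketch of what this tunable looks like in a ChaosEngine; the env name `DEFAULT_HEALTH_CHECK` below is an assumption based on the talk, so verify it against the Litmus docs for your release:

```yaml
# Sketch of the optional health-check tunable described above.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pod-delete-engine
spec:
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # "false" skips the pre/post-chaos liveness checks on the
            # target application/node; the default is "true".
            - name: DEFAULT_HEALTH_CHECK   # assumed name - check the docs
              value: "false"
```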
B
So on the Chaos Center side we have had a good number of enhancements as well. Starting with self-signed certificates: this was also a community request, to use a self-signed certificate for the communication between the delegate and the GraphQL server. Previously, in case you were using, I would say, a virtual gateway or an Ingress, you might be using your self-signed certificates,
B
and if you were using a self-signed certificate, that communication would break, because we were not supporting self-signed certificates. So now we have added support for them. For enabling it — I will show you in Helm. In Helm now, in the server envs, there is an env TLS_CERT_B64, so you can provide the base64-encoded form of the TLS certificate that you have self-signed, and you can also provide a secret name.
B
You can also provide a secret name. So there are two ways to provide the certificate. First is the TLS secret name — it depends on how you have deployed Chaos Center. In case you have deployed Chaos Center in cluster scope, it can go ahead and check the secret, so it will be able to fetch the secret and the certificate from it. But in case you are using namespace scope, it might not be able to fetch the secret directly,
B
so you will have to provide the base64-encoded form of the cert here directly. In that case the server will decode it, and the same certificate will be used for the communication between delegates and server. So whenever you are connecting a new delegate — a new agent — to your Chaos Center, the manifest you generate with litmusctl for deploying your agent is going to have that certificate.
B
So this helped us support self-signed-certificate communication between the GraphQL server and the agent, and it makes things easier for users who are using their own certificates. So next — this was also a feature request.
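A sketch of the two ways to supply the self-signed certificate described above, as Helm values for the GraphQL server; the key names are as heard in the talk (`TLS_SECRET_NAME`, `TLS_CERT_B64`) and may differ slightly in your chart version:

```yaml
# Illustrative values.yaml fragment - verify key names against your chart.
graphqlServer:
  env:
    # Option 1 (cluster-scope installs): name of an existing TLS secret
    # that the server can read directly.
    TLS_SECRET_NAME: "chaos-center-tls"
    # Option 2 (e.g. namespace-scope installs): the certificate itself,
    # base64-encoded, e.g. the output of: base64 -w0 tls.crt
    TLS_CERT_B64: "<base64-encoded-certificate>"
```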
B
This year we got some great feature requests from the community, and they helped us too. As I was saying, this year has been a year of learning for us, and this actually was a learning for us. I will show you through the UI: if we go into the chaos scenarios UI, you might be running your experiments here,
B
you are running your scenarios here, and this is the field that was added here — previously it wasn't here. So what was the issue, why did we add it, and why was it a feature request from the community? For example, you are in a project in the Chaos Center, and you have multiple users. There may be any number of users, and anyone can come and run the scenarios. Now, how
B
will you know who ran which scenario? That creates a zoo, because, for example, someone ran an experiment and you had downtime, or you just want to monitor who did the pod delete, or who did the network loss — why your target application was not behaving correctly. In that case you need this "executed by" field. So now what will happen is, in case someone runs the experiment in the scenario here,
B
it will show the username of the particular user who ran this scenario. This makes auditing easier for the users — they will be able to know who ran the experiments or the scenarios, and it's a good way to audit who executed them. Similarly, in the same manner, we also added the "updated by" field in the chaos scenario.
B
So in this case we also added a "last updated by". That is also required because, let's say you are the admin of the project and you created one scenario, and when you come back the next day someone has updated the scenario, and you are saying, okay, it was working fine yesterday, but now it's behaving differently.
B
So you want to know who updated it. If you are the one who updated it, your name should be coming here. But if, let's say, you created it and I went ahead and updated it, then my name should be available here, so that you can reach out to me and ask me why — because I might have my own hypothesis for that particular scenario. You might just be curious; you want to know why we changed that particular scenario.
B
So you might also get some good points out of it. That is how "last updated by" is going to help you. Both the "executed by" and "updated by" fields were a feature request from the community, and this was also a contribution from the community — these are some of the great contributions that we get from the community.
B
So next: we added the ability to configure the self agent components' node selector and toleration. Previously, when you were deploying the other delegates with litmusctl, we had the flags for providing a node selector and toleration, but for the self agent we didn't. So for the self agent now — I will show you through Helm. Here now we have the envs; I will just zoom out quickly.
B
So we have the envs for the self agent node selector and the self agent tolerations. We can use these envs to provide the toleration and node selectors for the self agent. For the other delegates, we can always use litmusctl — litmusctl is what is used for deploying external delegates, and there we already have the flags. In case you're not familiar with litmusctl, you can also check the repository, litmuschaos/litmusctl.
B
litmusctl is the CLI tool that we use for deploying chaos delegates and connecting them to Chaos Center. This tool also provides the functionality to set the node selector and toleration, and for the self agent, as we said, we can provide the same via the envs. These envs are available in the Helm chart, and in case you are using the manifest, they are there too, under the GraphQL server. So you can use these envs to provide the node selector and toleration.
B
This way the self agent is also going to have the provided toleration and node selector. But one thing — just make sure to add the node selector and toleration before deploying. When we log into Chaos Center, the self agent is deployed at that time, so in case you didn't add it before logging in, you will have to edit the deployment; but if you are adding it before logging in, then
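A sketch of what those self-agent envs might look like in the Helm values; the env names and value formats below are assumptions based on the talk, so check the Litmus Helm chart for your release before using them:

```yaml
# Illustrative only - env names and value formats are assumed.
graphqlServer:
  env:
    SELF_AGENT_NODE_SELECTOR: "kubernetes.io/os=linux"
    SELF_AGENT_TOLERATIONS: >-
      [{"key":"dedicated","operator":"Equal",
        "value":"chaos","effect":"NoSchedule"}]
```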
B
the self agent will be deployed with the provided toleration and node selectors. Okay, so next: we added support for scheduling the same experiment multiple times in a single scenario. This was one issue. If you come into Chaos Center and try to create a scenario from a Chaos Hub, we were able to select the Chaos Hub — that is fine — and move forward, and then you add your chaos experiments here.
B
So, let's say I add pod-delete once, and then I add pod-delete again, because I want to do the same — my hypothesis is that I want to do the pod delete in parallel, but on different nodes or different target applications. So in that case, what was happening?
B
We were providing the same names for both the experiments, and this was creating an issue. But now, if you see, I added two different pod-delete experiments and they both have different names, and even when I move forward to the weights step, I will have a different name for both experiments — I will not be confused about which one I am assigning the weights to. So this way you are able to select multiple instances of the same experiment, and currently, by default,
B
they all target the default application, but you can go ahead and edit the target application here, and that way you will be able to specify a different application for each of them. So the experiment is the same; the target application is different. Now you are able to do this too — previously, since they had the same name, we were not able to do that.
B
So next: support for a custom image registry inside the experiment. For this, let's go to Chaos Center and go to the settings page. We had introduced an image registry tab here in the settings.
B
What it does is let you provide your own image registry here — the server and the registry name — and if it is public, you can let it be, and if it is private, you can provide the image secret. So you can provide your own image registry here, and what it was doing is updating the images.
B
So let's go to manifest generation. Let's say I go ahead and select a Chaos Hub — if you see, there is a checkbox, "enable image registry changes". If you don't want to use your own image — say you are using your private images for running experiments, but you just want to use the litmuschaos images — you can just disable it; or in case you want to use your own private images that you specified in the image registry tab, you can enable it and then move forward.
B
So now, what will happen: let's say we select a pod-delete experiment, come here, and go into "edit YAML". If you see this image right here — these are all the workflow-level images — all of these images will be updated with your own image. The private images you provide will be substituted in, if you are enabling that checkbox.
B
This way, whatever pods get generated by this workflow, and whatever pods get generated as part of the chaos injection, will all use the private images you specified under the image registry. So you will be able to use your own image registry, and you don't have to go into the YAML and update the images yourself manually.
B
You can just enable that checkbox and provide your image details in the image registry tab, and that way it will be easier for you. With a single checkbox you can enable or disable the image registry override process here. I will just exit from here. So yeah, this was the image registry enhancement that we did this year, and this was also, I would say, a feature request from the community, because, for example, you are using a private image —
B
let's say you are overriding the image in one workflow, but you don't want to be overriding the images in all the manifests that you are going to deploy. So it's better to set it up in the UI once, and then you can just keep scheduling your workflows with your private images; you don't have to keep updating them, so this actually helps in a great way.
B
So next is Envoy proxy. Previously, in the frontend nginx, we were using HTTP version 1.0, which was not compatible with the Envoy proxy, so we upgraded the HTTP version in the nginx config. If I show you here — this is the config map that we use for controlling the frontend nginx config; it contains the default config for the frontend
B
nginx. Previously we had 1.0 here, but now this contains 1.1, which helps us support the Envoy proxy as well. This was the issue raised: if you are using an Istio-enabled environment, in that case you might be using a virtual gateway instead of an Ingress, and the virtual gateway uses Envoy for directing the traffic, so in that case we wanted to support the Envoy proxy as well.
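The relevant nginx directive is `proxy_http_version`; a sketch of where it sits — the surrounding server/location blocks below are illustrative, not the actual Litmus frontend config:

```
server {
    listen 8185;
    location / {
        proxy_pass http://litmusportal-server-service:9002;
        # nginx defaults to HTTP/1.0 for proxied requests; 1.1 is
        # needed for compatibility with Envoy sidecars, e.g. behind
        # an Istio virtual gateway.
        proxy_http_version 1.1;
    }
}
```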
B
So now with this we are able to support the Envoy proxy. This was a great enhancement and a feature request from the community. So yeah, the next one is the advanced tuning feature for experiments. I will show you in the UI — let's go back and come back here. These are the advanced options that we added here; previously these options were not here.
B
We were only able to update the steady-state details and the target application details. But now, if you see, there is one more tab here as well: one lets you update the advanced configuration at the workflow level, and one lets you update the advanced configuration in the ChaosEngine. These details — node selector, toleration — are going to apply to the chaos-related pods for whatever experiments you have.
B
You can enable it and add your node selector here, you can enable it and add your toleration here, and similarly you can enable the annotation check. That is already a core functionality from Litmus core: it allows you to reduce the blast radius. So, for example, you have three to four — not replicas, I would say three to four different applications which all have the same label.
B
Now you only want to target one application, but they all have the same label, so by default it is going to inject chaos on all the applications having that label. So for reducing the blast radius to a single application, or only to those applications which you want to target, you can just add the annotation litmuschaos.io/chaos to them and then enable this annotation check. In that case it will only target the applications that have this annotation.
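The opt-in annotation described above goes on the target workload; a minimal sketch (workload names are illustrative):

```yaml
# With annotationCheck enabled in the ChaosEngine, only workloads
# carrying this annotation are subject to chaos, even if several
# deployments share the same label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
  annotations:
    litmuschaos.io/chaos: "true"   # opt this app in to chaos
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: nginx:stable
```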
B
This way you will be able to reduce the blast radius at the experiment level too. Similarly, we also added the advanced options at the workflow level. The ones we just saw were at the experiment level — there can be multiple experiments, so you configure each one there — but at the workflow level you also get the node selector, so you can go ahead and add the node selector here, or you can add the toleration similarly, and there is one more tunable: the pod cleanup policy for your scenario.
B
So how do you want to do the cleanup? For example, you are running a workflow, and you want to clean up all the pods after the workflow is completed, or you want to clean up all the pods only after workflow success. Say you want to debug why your experiment failed — in that case you might want to set the pod GC to "on workflow success", so that the pods are only deleted when the workflow is successful; if the workflow failed, they stay around.
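This cleanup tunable corresponds to Argo Workflows' `podGC` strategy; a sketch of the workflow-spec fragment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: chaos-scenario-
spec:
  entrypoint: main
  podGC:
    # Delete pods only when the whole workflow succeeds, so failed
    # runs leave their pods behind for debugging; other strategies
    # include OnWorkflowCompletion and OnPodSuccess.
    strategy: OnWorkflowSuccess
```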
B
Okay, so next: we added support for connecting a remote Chaos Hub as well. This was a great enhancement, so I will tell you what the issue was and what it solves. We already had this feature of connecting a git repository. But let's say you are in an air-gapped environment and you only have access to, I would say, a GCS bucket or an S3 bucket —
B
you don't have access to a git repository, GitHub or GitLab or any git source. So in that case, you might want to put your Chaos Hub into your S3 bucket, or GCS bucket, or any bucket, and then what you can do is provide the URL for it here and provide the name here. And one thing —
B
if you see, there is one warning: the zip name and the Chaos Hub name should be the same. So when you're pushing your hub to the GCS or S3 bucket, you will have to zip it so that it is a single zip file, and the file name and the Chaos Hub name that you provide here should be the same.
B
So what it will do is go to the URL that you provide for the GCS or S3 bucket, download it, and unzip it, and you will be able to see the same Chaos Hub added here as a card. When you go inside it, you will be able to explore the experiments and the different chaos scenarios that are part of your custom Chaos Hub.
B
This helps you become independent of a git source — you can also connect your Chaos Hub via an S3 or GCS bucket, or any other bucket.
B
So the next one — this enhancement was added to solve a problem. First, we added an API for fetching the server version, and we also added the litmusctl compatibility matrix. Previously — let's go to the litmusctl repo and come back to the README —
B
we didn't have the compatibility matrix, and that was creating issues, because a particular version of litmusctl might not be compatible with a particular version of Chaos Center. We had these details, but if you are using litmusctl directly in your CI/CD or automation pipelines, you will have to update your litmusctl in case you're upgrading your Chaos Center, because the particular litmusctl that you are running as part of your pipeline might not be compatible with the Chaos Center.
B
And if you look at it from the debuggability perspective, litmusctl might fail and you might not be able to debug it — you'd want to know why it is failing, and the issue can be the Chaos Center and litmusctl version compatibility. So now what we have done in litmusctl is add the compatibility matrix, and there is also a command, litmusctl version.
B
So now, if you run any litmusctl command that is going to communicate with Chaos Center, it will check the versions first — the version of Chaos Center as well as the version of the current litmusctl. In that case, if they are compatible, the request will go through and your operation will be successful; but in case the versions are incompatible — let's say you are using version 0.7 of litmusctl and Chaos Center version 2.9.0 —
B
in that case it will give you an error that these versions are not compatible and the request might not be successful, so it's better to upgrade litmusctl to that particular version — say, you should upgrade your version to 0.10. So now it makes the debuggability, or I would say the upgrade, easier, and you will be able to find this out in a much faster way. And an API was also added.
B
For example, there are some community members who are not using litmusctl, but are still using the APIs for their automation. In that case they want to know, if they upgrade their Chaos Center, whether the API they are calling is compatible with the current Chaos Center or not. So what they can do now is call this server API to get the version of the server, and then they can check:
B
if the version is the same, they can make the query; otherwise they need to upgrade. So these are the issues it solved, and this was also a query from the community, because many users were facing these issues with respect to upgrades — this solves that issue too. So this one is the Chaos Center UI endpoint env.
B
This is for the case where you are using Istio-enabled environments, or any other environment where you are not going to use an Ingress — you might be using a virtual gateway, and in that case you might be providing the host and other things there. In that case, our GraphQL server is not aware of those custom resources, so it cannot fetch the host from the virtual gateway and so on; it is aware of Ingress.
B
It can go ahead and fetch the server host from the Ingress, it can fetch the node IP from the nodes, and it can fetch the load-balancer IP from the server service, and other things — but there is a limit. So for solving that issue, there is a new env for the Chaos Center UI endpoint. What you can do is, for example — you already know your host, because you're going to access Chaos Center on that host.
B
In that case, the delegate will be provided with this URL, so that it knows it has to connect to the server via this URL, not through a node IP or anything, because that is not going to work in case you are in an air-gapped environment. This is going to help you mostly in Istio-enabled setups. So in case you are using an air-gapped environment, and you have your own domain on which you are accessing Chaos Center, which Chaos Center might not be aware of,
B
you can provide the domain or the host here, so that the server can be made aware of it, and the delegate, as a result, is also made aware of the same.
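A sketch of what that env might look like in the server's Helm values; the env name below is an assumption based on the talk, so verify it against the chart for your release:

```yaml
# Illustrative only - the env name is assumed.
graphqlServer:
  env:
    # Externally reachable URL of the Chaos Center UI (e.g. a host
    # served by an Istio virtual gateway) that delegates should use
    # to connect back to the server.
    CHAOS_CENTER_UI_ENDPOINT: "https://chaos.example.internal"
```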
B
So yeah, those were the enhancements done on the Chaos Center side. Next is litmusctl. For litmusctl this year, there was a great contribution done by Prayag. He contributed the scenario CRUD operations — operations to be done via the CLI. So we now have support for CRUD operations that we can do with litmusctl.
B
As I was saying, users might be using litmusctl in their CI/CD pipelines. So in that case, now you can run scenarios via litmusctl, or describe a scenario, or get the scenario runs. This will help you in automating your CI/CD pipeline, because now you can create a scenario, and you can also get the scenario, so you can check the status as well, using some bash scripting and other things.
B
You will get to know whether the experiment passed or failed. You can also do delete operations and other things. So this was a feature request which the community had been asking for, and it was a great contribution done by Prayag. And next is some enhancement that we did in litmusctl along with that new feature: we added a few flags.
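A rough sketch of the kind of CI/CD usage described above; the exact subcommand spellings and flags vary between litmusctl releases, so treat these as illustrative rather than authoritative:

```shell
# Check client/server compatibility first (see the version matrix above).
litmusctl version

# Scenario CRUD from a pipeline - subcommand names are illustrative.
litmusctl create chaos-scenario -f scenario.yaml --project-id <PROJECT_ID>
litmusctl get chaos-scenarios --project-id <PROJECT_ID>
litmusctl describe chaos-scenario <SCENARIO_ID> --project-id <PROJECT_ID>
litmusctl delete chaos-scenario <SCENARIO_ID> --project-id <PROJECT_ID>
```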
B
The first is the --kubeconfig flag. In case you have multiple kubeconfigs and you want to target a particular cluster via a particular kubeconfig, you can just provide the --kubeconfig flag with the path to the kubeconfig, and in that case it will work.
B
Next is what we were discussing in the previous slide: we have added the version mapping with respect to Chaos Center, so it will allow you to check the compatibility of litmusctl and Chaos Center, and it will also make upgrades easier.
B
It will make the upgrade easier in your automation pipelines, based on versioning. So yeah, those were all the updates on the Chaos Center side, the chaos core side, and the litmusctl side. And if you look at the whole of it — the new features, the questions and answers that we have gone through — most of them came via community feature requests, and most of them were done by community contributions as well. So yeah, thanks for that.
B
So next is 3.0 beta. For 3.0 beta, let's discuss what we have on the roadmap and the different things we are looking into. Currently, there have already been two releases in beta, beta0 and beta1. You can check them out — they're still in beta; we don't support upgrades for them yet, but you can surely try them out.
B
You can surely check what is new coming in there, and you can also give feedback on what is there and what we can improve in those versions. There are three aspects to it: how we are going to make it robust, how we are going to make it leaner, and how we are going to make it more developer-focused. So first, let's start with robust: how are we going to make it more robust?
B
First, we can start with improved chaos orchestration. This is mostly focused on the residue that stays on your cluster after doing the chaos. So, for example, you are running a particular experiment in your cluster, and your pod gets evicted, or something like that happens. In that case your ChaosEngine might be —
B
the chaos pods might be living on your cluster in an evicted state, or in an error state. So to make it easier — and to make it interface more with the frontend, because we need to know what actually happened on the cluster — we are going to improve it so that no chaos resources stay on your cluster, and in case something happens,
B
we should get to know about it on the UI. There will be some changes we'll be doing, mostly on the core experiment side, because we have to reconcile on the pods. As an example, say the experiment pods were getting evicted — in that case, who is going to handle such situations?
B
The chaos operator might be the one, so it has to reconcile on those pods, check the status, and then take a decision based on that, and the same decision has to be reflected onto the UI. Those are the kinds of things we are going to improve, so that we can stop you from having to go to your cluster — you should be able to stay on the UI. So next is Helm-based automation.
B
This has been, I would say, a good ask from the community, and we agree with most of it, because if you are using litmusctl for connecting your delegates, you don't have much control over what that particular manifest contains. And say you want to change some CRD, you want to change some RBAC — or, I would say, not change,
B
you just want to look at what is in the RBAC and what we are going to install when we connect a delegate via litmusctl. So now there is a new chaos agent coming which will be Helm-based. With this Helm-based agent, you can just run Helm commands to connect your
B
chaos delegates to Chaos Center, and because it is a Helm chart, you can have your own custom chart — you can fork it, or you can have your custom values.yaml with your preferred settings already present, and just use it directly. And because it is going to be a Helm chart,
B
the templates are going to be visible to the community. So in case you want to know what RBAC is going in, what CRDs are going in, what deployments are going in, and how we are doing all this, you can surely go to the templates and check them out. This is something which is on the roadmap and which will be available very soon. So next is simplified UX.
B
We already have a UI to show you how you can construct complex chaos scenarios via the UI, but now, as I was saying — let's say you are already able to run your scenarios, but what you want to do is know what happened on the cluster. As we were discussing, something happened on your cluster and your pods were evicted or killed.
B
In that case you want to see the same on the UI — you don't want to go to the cluster — and you want to know what the impact was, how your application behaved, and other things. This will be an enhancement coming in the UI, how we can look more into this, and you can also give feedback for this one —
B
on what you are expecting in the UI, and other things. So that was on the robust side; how are we going to make it leaner? First is the native workflow. We are already using Argo under the hood for scheduling workflows, where a workflow is, I would say, a manifest which contains stitched chaos experiments, so it contains multiple experiments.
B
As an example, I can just show you here. Let's check this and add one experiment; let's take this one. I will just make this manifest full screen. This manifest is an Argo workflow, and you have the stitched experiment: your experiment here, the ChaosEngine here. But you may not want to run the complete workflow; you want to directly trigger the ChaosEngine.
B
You don't want to run this complete workflow just to trigger this chaos experiment. So for that case, there are different enhancements on the roadmap that will enable users to directly run a ChaosEngine. Currently the ChaosEngine is embedded here in the workflow as an artifact, but it is on our roadmap to enable users to directly trigger the ChaosEngines instead of a workflow.
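For context, the embedded artifact being described is a ChaosEngine custom resource; a direct trigger would mean applying something like the sketch below on its own, without the surrounding Argo workflow (the names, namespace, and service account are illustrative):

```yaml
# Illustrative standalone ChaosEngine for a pod-delete experiment,
# applied directly instead of being embedded in a workflow artifact.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: litmus
spec:
  appinfo:
    appns: default
    applabel: app=nginx
    appkind: deployment
  engineState: active
  chaosServiceAccount: pod-delete-sa
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"
```

A resource like this could then be triggered with a plain kubectl apply, which is the kind of direct ChaosEngine run this roadmap item describes.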
B
As for workflows: since we are currently using Argo, we might also introduce our own Litmus-native workflow, so that we can have more control over it, make it easier to schedule the scenarios, and make it more straightforward.
B
Next is one which has been a requirement from the community. Currently, if you are using stress chaos experiments or network chaos experiments, they generally create helper pods. Now, in case you have 10 replicas of the target application, 10 helper pods will be created. With 10 helper pods coming in for 10 target replicas, you might hit a situation where you
B
might have too few resources to accommodate those helper pods. So in that case, to make it more scalable and more helper friendly, we are not going to launch helper pods for all the target pods. Instead, we are going to launch one helper pod on each node, and that helper pod
B
will target all the target pods residing on that node. That way we are going to reduce the number of helper pods getting launched and, at the same time, reduce the impact of any resource-consumption issues and other things. And last is how we are going to make it more developer focused.
B
Currently we haven't been using the chaos-ci-lib library in our Chaos Center; there, we schedule workflows. But now what we are looking into is how we can enable chaos-ci-lib as well, to run experiments without a workflow. Those integrations might be coming in future releases; they are surely on the roadmap. And then the codebase refactor is a continuous process, and it will keep happening,
B
to reduce duplicate code and to make it more optimized. These things will keep coming and keep happening, so that is something that will be on the roadmap as usual. And then, improving the SDK: as I have already shown, we already added support for multiple templates, such as AWS and others. Similarly, we will keep working on it and keep growing it,
B
so that we can make it more developer friendly. It will help you generate your experiments from scratch much more easily. It will be a great enhancement there.
B
So yeah, that's all we are looking forward to in the 3.2 roadmap. I would say this has been a great year for us. We got a good number of contributions, as we discussed in the previous slides. Many of them were feature requests from the community and many of them were actually contributed by the community, and this is great; thanks to all. This is really great, I
B
think. Yeah, that's all from my side; I think we can take it from here.
A
Thank you so much, Vedant. Maybe we can move on to the last slide, if you can share it for us. So, obviously, thank you so much, Vedant, for sharing all the announcements and developments we have had over the year. And lastly, as we spoke about how you can get involved with the community: the GitHub is out there, feel free to check it out; it has most of the information. There are also docs which can help you get started with Litmus and understand
A
the various functionalities, how you can use the experiments, and the ChaosHub, from which you can obviously access your chaos experiments. Join us on the #litmus channel on the Kubernetes Slack, and feel free to check out the YouTube channel as well as the Twitter, to make sure you're connected with us on the socials. That's how you can get involved in the community. Once you join Slack, feel free to ping
A
us and mention your questions on the Slack itself, and we, the maintainers, the core contributors, and the community, will help you get started. Thanks again, everyone, for tuning in. I hope this webinar was really helpful to you all. With this, we look forward to an amazing 2023 and hope to see you as part of the chaos engineering community. Thank you so much, everyone.