From YouTube: CNCF CI Working Group 2020-01-28
A: Thanks so much for adding items to the agenda; this should be good. These meetings are recorded and the recordings are available on the CNCF YouTube channel. I'll jump right into some upcoming events. I think we're good to have the next CI working group call on the fourth Tuesday of February, that's the 25th at 12 noon Pacific, as well as March 24th and April 28th. There do not seem to be any conflicts with any conferences or holidays that I can see.
A: And the next KubeCon + CloudNativeCon will be in Amsterdam at the end of March, March 31st to April 2nd. The schedule announcement should be public tomorrow, so we'll be able to share that after tomorrow; looking forward to it. Then about four weeks after that will be the Open Networking & Edge Summit in Los Angeles. There's still time to submit to the CFP; it closes end of day Monday, February 3rd, and the schedule for ONES will be announced on Thursday, March 5th.
B: We will do a quick demo of one of the pipelines that we were able to build. I think it's going to be a recorded demo, because running a pipeline takes about 20 minutes or more, and I am sure we don't have that much time in this session. So I will probably take about 10-15 minutes to skim through the slides pretty quickly, and then Raj will take about 10 more minutes for the demo. Let me talk a little bit about myself: I'm a co-founder over at MayaData.
B: MayaData is the cloud native data management company for Kubernetes, and I'm a co-creator of the following open source projects: OpenEBS, which is cloud native storage, container attached storage for Kubernetes; Litmus, which is of special interest here, a chaos engineering tool set for Kubernetes that is helpful both in CI pipelines and for doing chaos engineering in production; and then KubeMove, another project that is a cross-cloud control plane for data movement.
B: As you can see, we are all about data management on Kubernetes, and that's what led us to create these projects. Specifically here I want to talk quickly about what led us to create Litmus in the first place and how we embraced chaos engineering in a cloud native way. Then I will quickly talk about the principles of cloud native chaos engineering, some quick use cases, where Litmus is headed, and then I'll probably introduce an exciting concept.
B: Thank you for that. We were asked to talk here and probably do a demo, so I'm happy to report that we were able to clone the cross-cloud CI project and implement Litmus as a reference implementation of chaos on CoreDNS, which is what we'll be demoing. So the genesis: it really started about 18 months ago, or a little more than that, when we wanted to start building chaos pipelines for this open source project called OpenEBS, which is our first open source project.
B: OpenEBS is a sandbox project inside the CNCF itself. Our idea was: you need to be able to convince the community that this project is well tested, and we wanted to be able to show them that there are a lot of negative test cases, that they are run in a very complete manner, and we wanted to give the community an opportunity to run those tests themselves.
B: Then we started looking at what chaos tools are available on Kubernetes to run those tests for an application like OpenEBS. Kubernetes itself is maturing and the tool sets around it are relatively better now, but looking at two years ago, you needed to build the tools yourself. So we started building the actual chaos tests in a Kubernetes-native way, which means we started writing them as jobs, and we did them for OpenEBS, but soon we realized:
B
We
need
to
run
the
same
jobs
for
other
applications
on
kubernetes
as
well,
because
open
against
is
the
storage
underneath
those
applications.
So
then
we
realized,
you
know
this
chaos.
Engineering
is
a
need
for
all
of
these
applications
and
then
this
infrastructure.
We
should
really
open
up
to
a
larger
community
and
it
can
become
a
bralette
in
itself
and
a
community
can
be
built
around
that.
B: So then we announced the Litmus project at KubeCon 2018 Europe, and now we are almost two years from that, and we recently did the 1.0 release of Litmus as well. In the process we defined a chaos infrastructure architecture, and we also started ChaosHub, which is really the central piece for the community to come together to build chaos experiments around various applications for Kubernetes. So that's really the genesis. Now let me actually define what cloud native chaos is; we all know what chaos engineering is.
B: It's about breaking things on purpose to increase resiliency; that is chaos engineering. And cloud native chaos engineering is really doing that chaos in Kubernetes environments, in a cloud native way, natively on Kubernetes. So this specific topic is about chaos engineering for Kubernetes, or for applications that run on Kubernetes, in production-like environments. Why is this very important? I took this slide from Dan.
B: If you look at this entire thing, the code that you really bother about is less than 1%, and there's a lot of change going on underneath. So how can you really determine the resiliency and quality of all that? CI pipelines are important, but in CI pipelines you run in some environment, while in production you run in a totally different way, and it could keep changing. The answer to that is chaos
B: engineering, right. When your environment is very dynamic and you need to continuously validate it, the answer to keeping your system resilient is doing chaos engineering. That's the big difference. But for cloud native chaos engineering you need one specific thing: whatever you do, you do it in a GitOps model. You need to have your YAML manifests so that you manage chaos the same way you manage
B: other applications: you define the manifest, you kubectl apply it, and things happen, because something is watching for the intended state and reconciling it. That's exactly what cloud native chaos engineering is. Put another way: if you do chaos engineering with YAML files rather than custom code, it's really cloud native chaos engineering.
B: So what do we need? For regular development you have APIs defined by Kubernetes, including some CRDs; but for chaos testing or chaos engineering you need some chaos resources, chaos CRDs, and that's what we started developing and defining. In that process, we defined some CRDs, developed an operator, and also built some metrics around it, so that you can actually go and see: I did some chaos, what's happening?
B: You can put into perspective all the results of the chaos over a period of time. That's the set of chaos resources that we really built, and we call it cloud native chaos engineering. So to summarize the principles of cloud native chaos engineering: it has to be open source, because anything that's cloud native in general is built around the CNCF, which means it's open source, Apache License 2.0, etc.
B: Then you need to have very generic APIs which are commonly accepted, which means we build around the community, and you do not want to be dictating exactly how a particular project should do chaos. So the chaos itself should be pluggable: I can kill a pod, but maybe somebody else wants to kill it differently; they can write a binary or a library of their own, plug it into this infrastructure, and then do the pod kills their own way.
B: So it should be pluggable, and it should be community driven. That is because the APIs need to become more robust over a period of time as the community contributes, and the roadmap also needs to be driven by the community. These are the principles of cloud native chaos engineering, and Litmus really follows all of that; I've written about this in a blog that's published on CNCF.
B: Litmus went 1.0, and that really means the API set is stable. As for reference implementations, we ourselves use Litmus day in, day out; when I say we, I mean the OpenEBS community. So there's a good amount of usage for us to say that it is stable enough, that it can be used in various other projects, and that it can be expanded on this set of APIs. The typical use cases for Litmus start with dev testing for your application; that's how we needed Litmus in the first place.
B: Any application does dev testing and certification testing in CI pipelines, and there you need to do some negative testing. You need not build the entire negative test suite yourself; somebody could have built that for you, and you can just pull it and use it. Just like there's a Docker image you pull and then run the application, you can pull a chaos experiment and run it. So that's CI pipelines. One more place this happens is stage testing, or UAT.
B: Before going live, people want to make sure that their deployments are good, and as you have seen, only 1% of the code is what you own. How do you actually verify that in stage testing? You need a way to do a lot of negative testing, and for that you can use Litmus. Then one of the other major areas where we are seeing Litmus used is Kubernetes itself.
B: Is this Kubernetes good? You are running a lot of very big applications on Kubernetes, but Kubernetes itself needs to be upgraded quite often; it's not like the Linux kernel that I might need to upgrade once in six months, if not less often. So how do you actually validate upgrades in pipelines? I need to make sure Kubernetes itself is good, so you can run a Kubernetes set of chaos test cases and make sure that this Kubernetes setup works for you and your set of applications.
B: Then, in your real production, you can update Kubernetes. And of course the last use case is chaos in production itself. For this presentation, I think we are going to concentrate a little bit more on how we integrate with CI pipelines, because CNCF CI is really a project that defines the CI pipelines for all the CNCF projects. So how can you improve the resiliency, or the credibility, of those pipelines by adding a chaos stage for each of those projects? Let me now show the Litmus architecture I just mentioned in a depicted way.
B: It's open source, and it has pluggable chaos libraries: it's not just Litmus's own libraries, it also includes two other well-known chaos libraries. One is PowerfulSeal, which is from Bloomberg, and the other one is Pumba; you might have heard about Pumba as a chaos tool used for introducing network latencies, etc. So right now Litmus as a chaos framework embraces three different sets of chaos libraries. It's got the CRDs, and it's got a way for the community to contribute chaos experiments.
B: I'll probably skip some of this. These are the different CRDs: the ChaosEngine is the way you tie your application to a chaos experiment; the chaos experiment has the actual logic, the action to kill something; and then there is also the ChaosResult, which encompasses all the results.
B: Then you will have multiple chaos experiments for a given application, and it's pluggable, like I said: we have our own libraries in addition to others, and you can add one more library if you want, but most likely you will have enough experiments that you can just add, and you might need to add more experiments rather than more libraries. So as an example, PowerfulSeal: this is how we built it, we just built a Docker image out of the PowerfulSeal chaos tool,
B: and when called, it will run an experiment; we set up the chaos library to point to PowerfulSeal. It's very simple to plug chaos in. Then you have ChaosHub, which is really the most user-centric piece of Litmus: you have a bunch of experiments in a given place, which I'll talk about in a little bit. What developers do is, after they develop an experiment, if they want those experiments to be used once their application is shipped into production, they can push them into ChaosHub.
B: So users, whoever is using the application, pull those experiments from ChaosHub and run them in production, or they may be running their own pipelines before pushing into production, before doing the CD. So they can use these experiments both in production and in pipelines. This is how the architecture looks: it's got some experiments, then there are some libraries, and you have some CRDs. This is how users interact, and developers interact really by developing the experiments.
B: So that's a quick look at this. How do you start using Litmus? You already have a ChaosHub with a set of experiments, you have your app running, and it's pretty simple: you can use either the Helm chart or YAML files to install it. That installs the CRDs and an operator, and then you can pull whatever charts you want.
B: You may not want all the charts, because there are plenty of them. Whatever charts you want, you pull them into your Kubernetes cluster, and then you inject chaos by creating a ChaosEngine CR. Once you create that CR, the chaos operator picks it up, introduces that chaos on the given application, and it creates your ChaosResult, and you can go and see what has happened.
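The flow just described, pulling a chart and then creating a ChaosEngine CR that the operator picks up, can be sketched as a manifest like the following. This is a minimal illustrative sketch based on the Litmus v1.x API; the application name, namespace, labels, and service account are hypothetical placeholders, not taken from the talk.

```yaml
# Hypothetical sketch of a Litmus v1.x ChaosEngine; names and labels are placeholders.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: my-app-chaos
  namespace: default
spec:
  # Which application the chaos is targeted at
  appinfo:
    appns: default
    applabel: "app=my-app"     # label selector for the target deployment
    appkind: deployment
  # Service account with permissions to run the experiment
  chaosServiceAccount: my-app-sa
  # Experiments pulled earlier from ChaosHub
  experiments:
    - name: pod-delete
```

Applying this with `kubectl apply -f` is what hands the request to the chaos operator; the outcome then lands in a ChaosResult resource you can inspect with `kubectl get chaosresult`.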
B: Then you have the chaos exporter, a Prometheus metrics exporter, which you can use to put time-series metrics into perspective and say: hey, this application was working well all the time, but now there are some issues observed when a particular chaos was introduced. So you can get some analytics out of it, make sense of what has gone wrong, and take corrective action.
B: So that's how Litmus really works, and it is developer friendly: just like a developer creates a pod or other resources, you inject chaos as well. Injecting chaos is creating a YAML spec, the ChaosEngine, where you specify which experiments you want; then you run it, it gets executed, and you get the result. It's a completely Kubernetes-native way; you do chaos however you manage your other objects.
B: The charts are generic or application specific. You can see the generic chaos charts, and then there are multiple applications; OpenEBS was among the first applications, and for the purpose of this demo we actually created new chaos charts for CoreDNS. There are more applications in the pipeline, so we hope to see more applications coming onto this hub pretty soon. So how does that work?
B: You can write your own experiments and push them onto the hub, so that users need not redo whatever you have already done; it can be used as a CR itself, and it becomes an application-specific chaos experiment on the hub. For example, OpenEBS: OpenEBS is a complex application. It's not a very simple deployment, it's got various components, and it's not just about
B: killing a pod and hoping that things are working; the logic is more complex. For example, you have application-specific checks that really talk in the language of that application. Say I want to kill a target of OpenEBS: is everything happening as per my expectation on the actual replica? Underneath you will be doing a Kubernetes pod kill, but in the logic that you write above and below it, you are really going and verifying the application, not the Kubernetes resource. That's how application-specific experiments come in.
B: So the proposal that we have is: just as we are using Litmus in our pipelines, why can't other Kubernetes and CNCF projects use Litmus for doing chaos testing? It's really as simple as that, start using Litmus. It should be fairly simple; for example, CoreDNS took us about two weeks, but most of that time was spent understanding cross-cloud CI, not really writing an experiment. So anybody with a good understanding of the pipeline and reasonable knowledge of Litmus
B: should be able to do it fairly quickly, in about a week or so. There are easy-to-use chaos experiments for Kubernetes already, plus the CoreDNS ones we added, and more can be easily developed with help from the respective teams. We think the project teams should come forward, because they know their applications best; for applications such as Envoy, Vitess, and etcd, we should be able to help those teams develop experiments based on Litmus and add them as chaos stages into CNCF CI. And to begin with, Kubernetes itself has a lot of experiments that we already defined, so those can be added into the pipelines. With that I would like to stop. Any questions before I pass on the demo to my colleague Raj?
C: [Question, inaudible.]
B: Absolutely, it is possible. The hub itself, I mean the core itself, is open sourced, so you can clone the hub and set it up. Probably documentation is missing around how to set it up; I'll take that as a note. But it is very easy: you can have your own hub and then set up some synchronization to the upstream. Yeah.
D: We chose one project from the CI dashboard, that is CoreDNS, so I will explain the workflow and how we can integrate Litmus into the pipeline. As you can see in my block diagram, whenever a developer commits to source code management, it could be GitHub or GitLab, it can trigger the pipeline. So this is the pipeline of CoreDNS.
D: Currently we have two stages, as you can see: the first one is the build stage and the second one is the package stage. In the build stage, what we do, as I saw from the code, is build the source code and upload the artifacts, which will be used by the packaging stage; in the packaging stage we are just building the image and pushing it to Docker Hub. After this stage, the test stage will be available.
D: In the test stage we have multiple jobs, and in this stage we are using Litmus. In Litmus we have an experiment called pod-delete, the CoreDNS pod-delete experiment. This is the workflow of the experiment: first we create one cluster, a kind cluster, Kubernetes in Docker, where we install Litmus and all the operators and CRDs. The main functionality is this part: it will replace the kind cluster's CoreDNS image with the latest build which was pushed to Docker Hub, and test this latest image and its functionality.
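The stage layout described here, build, package, then a chaos test stage on a throwaway kind cluster, could be laid out roughly like this in a GitLab-style pipeline definition. This is a hedged sketch, not the actual cross-cloud CI configuration; the job name, manifest name, script, and variables are hypothetical.

```yaml
# Hypothetical sketch of the stage layout described above (not the real cross-cloud CI config).
stages:
  - build      # compile CoreDNS from the commit, upload artifacts
  - package    # build a container image and push it to Docker Hub
  - test       # chaos test stage using Litmus

chaos-test:
  stage: test
  script:
    # 1. Bring up a throwaway Kubernetes-in-Docker cluster
    - kind create cluster
    # 2. Install the Litmus operator and CRDs
    - kubectl apply -f litmus-operator.yaml   # placeholder manifest name
    # 3. Point the cluster's CoreDNS deployment at the image built in 'package'
    - kubectl set image -n kube-system deployment/coredns coredns=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # 4. Run the pod-delete experiment; its pass/fail decides the job's result
    - ./run-pod-delete-experiment.sh          # placeholder script
```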
D: Then we install the CoreDNS pod-delete experiment and run it, and based on whether the experiment passes or fails, the build will be marked as pass or fail. So that's the workflow; I will explain a little more about the experiment later in the session. Moving forward, we have a pipeline; I cloned it from the CoreDNS configuration earlier. We have three stages: the first one is build, as I told, the second one is package, and the third one I made is the test stage. I will explain the code and show you a demo of it as the session moves forward. These are the earlier build pipelines; it took around 9 to 10 minutes to build.
D: In the first lane you can see we install the Litmus experiment based on checks: if Litmus is not installed, it will be installed. If that is successful, then we annotate the CoreDNS deployment to be used by the Litmus operator, and after that we have the main component, called the ChaosEngine; I will show you the ChaosEngine spec and how it looks. Now, on creation of the ChaosEngine, it will automatically create one runner pod.
D: What the runner pod does is create one experiment pod, which is the CoreDNS pod-delete job. Then there are the pre-chaos and post-chaos checks: as we know, CoreDNS's main functionality is service name resolution, so it will create one experiment pod and another pod for liveness. The liveness pod will be recursively checking the DNS service; if resolution fails, it will show in the logs, and if all the things are up and good, it will keep running.
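The liveness pod described above, something that keeps resolving a cluster name in a loop while CoreDNS pods are being killed, could be approximated with a pod like this. This is an illustrative sketch, not the actual Litmus liveness pod; the pod name, image, and loop are assumptions.

```yaml
# Hypothetical DNS liveness checker: resolves a cluster service in a loop and logs failures.
apiVersion: v1
kind: Pod
metadata:
  name: dns-liveness-check
spec:
  containers:
    - name: checker
      image: busybox              # any image with nslookup works
      command: ["/bin/sh", "-c"]
      args:
        # Resolve kubernetes.default every 2 seconds; log a failure if resolution breaks.
        - |
          while true; do
            nslookup kubernetes.default || echo "DNS resolution FAILED at $(date)"
            sleep 2
          done
```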
D: There's a demo in the coming session where I will show you how to inject the chaos on CoreDNS. For the CoreDNS pod-delete, we have two libraries, as was already mentioned: the first one is the Litmus library, and the second one is PowerfulSeal, which is the project by Bloomberg. It will kill one of the replicas of the CoreDNS deployment, and based on the result it will save a ChaosResult custom resource; the result may be pass or fail.
D: If you look at the service account spec, the service account name is coredns-sa. Here I gave around six resource permissions, which are necessary to run the experiment, with verbs like create, delete, and list; by binding with the cluster role, I applied the RBAC. And if you look at the ChaosEngine, this is the main component of Litmus; you can see we have the main information of the application. By default CoreDNS has a label, k8s-app=kube-dns.
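The service account setup just described, a coredns-sa account bound to a cluster role granting create/delete/list on a handful of resources, might look roughly like the following. The exact resource and verb lists are assumptions on my part; the real experiment may need a different set.

```yaml
# Hypothetical RBAC sketch for the CoreDNS pod-delete experiment; resource list is assumed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: coredns-sa
rules:
  # Assumed resources; adjust to what the experiment actually touches.
  - apiGroups: ["", "batch", "apps", "litmuschaos.io"]
    resources: ["pods", "jobs", "deployments", "events", "chaosengines", "chaosresults"]
    verbs: ["create", "delete", "list", "get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: coredns-sa
subjects:
  - kind: ServiceAccount
    name: coredns-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: coredns-sa
  apiGroup: rbac.authorization.k8s.io
```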
D: So we put this app label, it is under the kube-system namespace, and the kind is deployment. For the chaos type, currently we are supporting two: one is application-level chaos and the other is infrastructure-level chaos. We classify this experiment under the infrastructure level, so I put the chaos type as infrastructure level. Here you can see the service account name: I already created the service account, coredns-sa, and I am using it in this spec.
D: This has two env variables, set by default; overriding them is optional. You can set your chaos duration, how much time you want to inject the chaos for, and the second one is the chaos interval: suppose you have two pods, as in CoreDNS, then the time interval between the first pod kill and the second pod kill is the chaos interval. The rest of the settings are optional.
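Putting the pieces just described together, the kube-dns app label under kube-system, the deployment kind, the coredns-sa service account, and the optional duration/interval env vars, a ChaosEngine for this experiment would look roughly like the following. This is a sketch reconstructed from the talk using Litmus v1alpha1 field names, not the exact manifest from the demo.

```yaml
# Sketch of the CoreDNS pod-delete ChaosEngine as described in the talk (Litmus v1alpha1 fields).
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-coredns
  namespace: kube-system
spec:
  appinfo:
    appns: kube-system
    applabel: "k8s-app=kube-dns"   # default label on the CoreDNS deployment
    appkind: deployment
  chaosServiceAccount: coredns-sa
  experiments:
    - name: coredns-pod-delete
      spec:
        components:
          env:
            # Optional overrides mentioned in the talk:
            - name: TOTAL_CHAOS_DURATION   # how long to inject chaos, in seconds
              value: "30"
            - name: CHAOS_INTERVAL         # gap between the first and second pod kill
              value: "10"
```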
D: So on creating the ChaosEngine, the runner pod is automatically created, as in the flow diagram I showed you; the runner pod creates one job, and that job creates two more pods, which are the experiment pod and the liveness pod. The liveness pod will recursively check the DNS service.
D: If you look at the logs of the liveness pod, you can see it, and we can also see that the first pod is terminating; six seconds ago the first pod went down, and it waits before the second pod kill, depending on the chaos interval. Yeah, the second pod also goes down. You can see here that the liveness check has failed, because it failed to curl the DNS service. So if you don't want this error, you have to make sure that you have a sufficient number of replicas in the deployment.
E: To be quick, just a summary, and you can find more information later. Thanks for giving me a chance to talk about this. I've updated the slide in the deck, so one of you can share that slide, yeah. It's basically, as I said, just a short update. The CDF is a relatively young foundation, founded in early 2019, so it's nearly one year old, and the purpose of the Continuous Delivery Foundation is to bring different continuous delivery projects together.
E: The idea is for projects and users to work on CI/CD in a collaborative manner and to provide a neutral platform; it has many members, which you can check on the website. As some of you might have already experienced, when you look at the different CI/CD tools and technologies out there, and if you intend to move from one to another, you might have found that things are not really streamlined between these tools and technologies, and beyond streamlining them, they should be able to interoperate.
E: So the CDF governing board came up with nine strategic goals around Q2 last year, and one of those goals was to work on tool interoperability. Based on that feedback, and based on our own learnings from the communities we are working with and from our employer companies, we said: maybe we should go and propose a special interest group for interoperability, to bring users and projects together to work on this important area.
E: Based on those discussions, we proposed the formation of this SIG through the CDF TOC, and about two weeks ago the formation of the SIG was approved by the Technical Oversight Committee. So, to summarize, this SIG aims to bring users and projects together to collaborate on the CI/CD interoperability area, because CI/CD, if you look at it, is vast, and it is nearly impossible to tackle all the challenges in one group.
E: That's why we went ahead and created this group, to work on interoperability aspects of the CI/CD landscape. We have representatives from various companies in this SIG, such as Netflix, Google, Ericsson, Puppet, and Lumina Networks, among others, and apart from the company representatives we have representatives from various projects, like Jenkins, which I'm sure all of you have used or heard about, Jenkins X, Spinnaker, and Tekton. The CNCF cross-cloud CI team also takes part in these conversations.
E: Whatever challenges we see, whatever ideas we have, we share them with the rest of the participants of the SIG, and the basic way of working is: you come together with people and just start talking about the problem, discuss possible solutions with other people, and then perhaps form some work streams to narrow the problem domain, identify the questions, and work on those things, perhaps ending up with some kind of de facto standard, or at least a call to action for broader participation. To enable that, we as the SIG meet every second week.
E: We meet every other week on Thursdays at 3 p.m. UTC and just talk about these things. Our first meeting was last Thursday, and the first thing we started working on is simply documenting the vocabulary we are using, or these tools are using, so we can at least identify or come up with some shared vocabulary to communicate with.
E: So that is one of the first things we started working on, and there are other ideas, like pipeline standardization and maybe CI/CD events standardization, and so on; that work will hopefully start soon. If any of you is interested in at least looking at what we are doing, or in contributing to this work, you can just look at our repository on GitHub under the CDF organization, add your name under members, and send a pull request.
E: Comment on existing pull requests, share a document in which you might have put your ideas, and just talk with other people. So if you want to collaborate in this area, just come and join us. And thank you to the cross-cloud CI team for joining this effort; we have been talking about this for years, and perhaps this is our chance to do some good work in this area. That's all, thank you.
E: The CDF is arranging a day-zero event, it's called the Continuous Delivery Summit, and it's in the planning phase now. The full-day event with talks will happen that Monday, and we have plans to submit a talk giving updates on the work we are doing under the SIG. We also plan to have some kind of get-together there: find a coffee machine, stand around it and just talk about this stuff, and perhaps have dinner and continue talking.
F: All right, so this is a review of the CI for the CNF Testbed. If you're not familiar with the CNF Testbed, you can find it at github.com/cncf/cnf-testbed. It's a project that has a bunch of different use cases for networking functions on Kubernetes, and it tries to solve problems or test different technology within that space.
F: There are a bunch of different use cases; some are pretty simplistic, going all the way up to trying to get to the point where we're maybe testing evolved mobile technologies and things like that. For CI with the testbed, there are some challenges as far as needing access to certain hardware resources, and trying to keep things in-band as far as a proper way of installing things with Kubernetes, so there are plenty of challenges that go along with that for networking.
F: As far as the challenges: deploying hardware, of course; provisioning data plane technology, such as a VPP installation; and then customizing those things so that other people can run and test them in their own environment. These all have their own unique challenges. As far as deploying hardware, you can think of specific networking hardware that you might need.
F: In our case, with these use cases, we have things like SmartNICs and other types of hardware, where you might need to set some type of boot option or a BIOS option, or changes like that. So for deploying hardware we're aiming to deploy everything on Packet, so we have a neutral environment.
F: It basically makes sure that it's customizable: you have your node structure that you want to be able to set up for Kubernetes, the facilities, and the different other options, like VLANs, node types, things like that, for Packet, and what it's going to do is output a list of nodes, or IPs.
F: Then, how are you going to configure some of the network-specific things, like a data plane? Most of our use cases use VPP, which is an open source data plane, to configure a vSwitch on the node itself; I believe it's still on the actual nodes. So this is going to be one of those things that might be out-of-band, because you're configuring the node, but it ends up being something you need for performance.
F: You can use the Makefile directly, and there are these options that we have here for firing off the different stages yourself, manually. So if you end up wanting to run some of your networking functions and use the CNF Testbed to do that, this is going to help you there. We have a package generator that helps generate all the packaging for the tests, and some of that is more granular here. So I think that's it; I'm out of time.