From YouTube: Knative Community Meetup #10: 3.24.21
Description
On March 24, 2021, the Knative community hosted a meetup featuring a demo, "Release Automation for Knative Apps with Iter8," presented by Srinivasan Parthasarathy, Research Scientist at IBM. Iter8 is an open source AIOps platform for cloud native release automation. Iter8 enables developers, data scientists, and SREs to maximize business value and guarantee SLOs by automating metrics-driven experiments, progressive delivery, validation, and promotion/rollback of new versions of apps and ML models.
A: Great, so hello everybody, welcome to the Knative community meetup. My name is Maria Cruz, I'm a program manager in the Google open source programs office, and we have quite a packed agenda, so we are gonna go ahead and get started. If you're just joining in, please find the agenda in the event details on the calendar.
C: All right, just a quick update: we have basically been really heavily focused on getting our house in order and making sure that we are well on our way to getting Eventing to be v1 ready, as in making sure that we have all the conformance tests, all the code quality bars, and so forth. So that's something that's been worked on heavily. We would like to get some more folks coming in and partying with us; it's a lot of fun.
C: You don't want to miss this, so come and check it out. We have a couple of project boards; I'll update this after the fact, but they're there just visibility-wise. Okay, so that's the conformance testing. The other update for everybody to know is that in version 0.23, aka the release after the one that is coming up in the next two weeks, so this is end of May, we are dropping all the v1beta1 resources in eventing, messaging, and flows.
C: So it's all going to be v1, so plan accordingly. And we had an issue filed that had to do with some of the webhooks and the resource allocations, which was causing all kinds of hilarity because it was spinning up too many of them. So we had a community member who basically came in and filed the issue. We chatted about that, asked them to come in and see if they could tackle it, which they did. So I just wanted to say: thank you, thank you, thank you to the first-time contributor.
C: Well, now that I've talked, I want to go ahead and pass the mic to Doug, who is going to eloquently give an update on some of the work that is going on as far as Knative as a whole. So that was more about the Eventing side and how we are getting ready, and Doug is going to go ahead and chit-chat about the v1 work as a whole. Yeah.
E: Okay, perfect, thanks. Yeah, so I was asked to talk about what's going on with v1 from a high-level perspective, but I realized that not everybody understands or knows what's going on with the trademark committee, so I wanted to bring everybody very quickly up to speed there. We made a very clear decision as part of the trademark committee to have a very clean separation between the Knative trademark committee stuff versus what we're calling the reference implementation, meaning the open source code side of the house.
E: Okay, because there's a very large mixing between the two worlds today, and that's not really necessarily good. We want to make sure that people understand that you can be conformant to the specifications, which are owned by, in essence, the trademark committee, and not have anything to do with the open source code. So you can be conformant with the specs but not have anything to do with the open source. That's a very important distinction. Okay.
E: Now, even though the trademark committee owns the trademarks for Knative, we are giving the open source reference implementation special rights to go ahead and use the name, because it's the default reference implementation, historical reasons, whatever you want to call it. That thing is special, okay, so it can continue to use the name for everything. That's fine, unless you guys do something that really upsets us, then we'll end that, but that's not gonna happen.
E: Now with that, though, aside from the name, we needed to move the specifications into a repo. So we created a specs repo and moved all the conformance, I'm sorry, the specifications into that repo. That way it's clearly owned by the trademark committee, and it's distinct and separate from the reference implementation itself. Okay, along with that, we obviously then need to move over the conformance tests. So everything's now going to be under this specs repo.
E: For the trademark committee, okay. Now, the trademark committee will then define what it means to be compliant or conforming to the conformance specifications, what it means to actually be conformant, how long that lasts, stuff like that. We'll define all those rules in the not-too-distant future. Okay, we also need to go back and update the TOC and steering committee charters, because the language in there right now is a little bit fuzzy in terms of who actually owns what.
E: In fact, I think the steering committee charter actually has a sentence or two that kind of implies they own the specs, or they own the name, and stuff like that. So we need to clean that up a little, just to make that split a little bit clearer. Now, everything I talked about here is from a strict legalistic perspective, meaning we're trying to get a clean separation so that there's a clear delegation of who owns what and everything's very clear from a legalistic perspective.
E: Now, having said that, though, in the real world the trademark committee does not work in isolation. We are obviously going to be heavily influenced, and expect to be influenced, by the open source community, whether it be the reference implementation or just anybody who happens to want to contribute to the specs. So we expect a lot of collaboration between the trademark committee and the open source code teams, to make sure that everybody's in sync and everybody's happy. It's just that we need a clear legal separation between the two worlds.
F: I mean, you'll regret not saying that sooner. I know you may just be misspeaking, but it sounded like you were saying that, because you were kind of differentiating the OSS reference implementation and the KTC spec, and that sort of implies that the spec isn't either open source or developed in the open, and that seems wrong.
F: Right, like, I thought that the open source community developed two things, a reference implementation and a specification that anyone can implement, and the KTC owns a trademark and needs to sign off on what the open source community does in terms of changing that stack. But I didn't think that when we created the KTC we were saying the open source community doesn't own the specs anymore. Do you see that? No?
F: Are they not owned by the same open source community that owns... I mean, they're open source, obviously, right, I can go and get the source of them. But are they not owned by the steering committee? Are they not owned by the TOC? I know the KTC has to sign off on them, but, no, I...
E: No, the KTC owns the specs from a strict legalistic perspective, and the steering committee and TOC own the reference implementation. There is a very clear delineation between the two because, technically, if you think about it, if one day the reference implementation decides to just vanish, the specs will live on, so who owns that? It's the KTC.
F: I mean, I kind of thought also the... right, I mean, the reference implementation: if the whole open source community disappeared, the code would not vanish, right, and then the KTC would own...
E: That's why I tried to be very clear that this first major bullet is all from a strict legalistic perspective, but reality is the second bullet, right. We don't expect, you know, Brendan, Ron, and myself to be the only ones making changes to the specs. In fact, chances are we will never make changes to the specs. It's the open source folks who will be doing that, through PRs, okay. Right, okay, so moving forward then, let's get to the good stuff: v1.
E: So there were a couple of steps necessary to get to v1 from a spec perspective. Obviously we talked about two of them already: move the specs and move the conformance tests to the specs repo, and Marty already did that, so thank you to him for making that happen. Oops, okay, now, in terms of actually getting over the finish line to v1.
E: What we'd like to do is this. Hopefully on a previous TOC call (and I apologize, I haven't been attending those recently), through the TOC call the WGs have been made aware that they need to go and start reviewing the specifications.
E: Reviewing them to make sure that they have everything they need, feature-wise, for v1; make sure there isn't stuff that's been slipped in by mistake that people don't want to support for v1; basically, just make sure it's v1-ready from a text perspective. If people would like to make changes, they can follow the normal PR process, just against the specs repo. We haven't decided on the formal deadline for when we want all found issues, or the review cycle, to be finished.
E: But later this week, when we have the KTC meeting, I'm going to propose that we shoot for an end-of-April date for all the PRs to be filed. Hopefully we'll resolve them relatively quickly, and hopefully there aren't many at all. But when all those PRs are resolved, then we can go forward and say: okay, folks, we're planning on this being v1, slash, GA of the specs.
E: Then the reference implementation can decide when they want to go GA. Technically, they could go GA before that if they really, really wanted to, but that's up to them, since they're separate entities. Okay, now, from a conformance testing perspective, we will try to move as quickly as possible to get that all set up in terms of the rules and regulations and make sure the test suite works properly, but we decided that we're not going to hold up GA for that. Okay.
E: That way GA can happen at the appropriate time from a technical perspective, and we'll worry later about the bureaucracy that's necessary, because I know people are very anxious to get to GA. Anyway, that's the current thought process. I just wanted to bring you guys up to speed, and if anybody has any questions...
B: So what happens if an implementation only passes a percentage of the conformance tests?
E: Yes, that's an excellent question. The way we've been thinking about that particular question isn't so much about only doing a percentage, because no one has suggested "oh, I only want to do 50% of Serving." The problem has always been stated as: well, what if I want to be Eventing-compliant but not Serving-compliant, that kind of stuff. And we do want to allow that kind of thing, and I've got to be honest with you.
E: We had a conversation about this last week and I don't remember exactly where we landed, but we are definitely going to try to accommodate those people, because there are organizations out there like, for example, TriggerMesh, who, I believe, are only doing Eventing stuff and not Serving stuff, and we want them to be able to get some sort of seal of approval, right. So I'll have to go back and double-check on the full answer, Max, but we are definitely considering that type of scenario.
C: No; for the first one, that is the goal we want to get to. To go ahead and make progress, we have kept these in the eventing repo for Eventing, and then we have been using, say, the RabbitMQ broker as a way of keeping us honest, to make sure that we are not just testing the core. But the long-term goal is to move them into the Knative specs repo. It is not there yet.
G: And to that, Eventing is doing a great job of documenting, of writing down the laundry list. So if anyone in the community is watching this and wants to contribute on writing tests, or in the test area, there's a lot of work there that you can pick up. Just my two cents.
A: Thank you so much, Zach. I think we have one more update from the UX working group, and then we need to dive into the demo.
G: Yeah, I can give the update in one minute. So the last time that we were in the meetup, I think we were in the process of creating a new working group for user experience, and this time around that process is complete. We went through the TOC, we did all the mechanics.
G: We now have a repository, and we're using a GitHub project roadmap; the link is in the agenda. You can go out there, and there are about six items, cards, in there; they're called outcomes. If somebody wants to see what we're working on, or wants to help with one of those outcomes in the roadmap, you can sign up there, make a comment, or join the Slack channel, the user-experience channel. We also gave a presentation to the TOC when it was our turn, so the recording is available.
G: That's if you want to see the things that we did in terms of user interviews, analysis, design thinking, and the management of the working group. Yeah, we're open to contributions and members. That's all.
A: Thanks, thank you both, and sorry to hustle through the last update. I think with that we're gonna pass it to Srini, who is going to present the demo for today: release automation for Knative apps with Iter8.
I: My name is Srinivasan, I'm a research scientist at IBM Research, New York, and I'm here to introduce the Iter8 project to the Knative community today. The Iter8 project is made possible by contributions from a number of wonderful people. I've listed their names here, and this is a growing list that hopefully will continue to grow.
I: Okay, so what is Iter8? Iter8 is an open source platform for cloud native release automation and experimentation. As developers of Kubernetes apps or machine learning models, we often have a number of goals. Maybe, in a canary test, we want to validate that our canary release, the canary version, is satisfying service level objectives. Or maybe you're doing an A/B test or an A/B/n experiment and you want to identify the best version: you have a number of versions of your application.
I: You want to compare them and maybe pick the one that maximizes the business objective of interest to you: maximizes user engagement, maximizes revenue, minimizes cost, whatever; pick the best version. And in doing these types of releases and experiments, we also want to protect the end-user experience. So if a version is satisfying SLOs, maybe we want to shift traffic to it gradually.
I: In particular, you can use this experiment resource to say that you want to do a conformance test. This is a different conformance test than what we heard about before; this is really about validating SLOs using metrics. You can do a canary test, where you have two versions and you're comparing them. You can progressively shift traffic to a particular version of your application based on how it performs with respect to metrics, and you can specify SLOs or SLIs based on metrics. And you can also bring in more.
I: You know, integrate more advanced features like dark launches, traffic mirroring, and segmentation, which we will see today, and you can also integrate experiments with app configuration tools like Helm and Kustomize. So these are some features at a glance, and these are things that you can start using today. All right, so let's dive into our first demo. This is a conformance-testing demo for a Knative app, and once again, by conformance testing what we mean is that this version of your app satisfies the SLOs that are specified within the experiment.
I: This is the scenario that we are going to demo. You have a single version of your Knative application; we will see that this is actually a Knative revision, since in Knative, versions correspond to revisions. So you have a Knative revision, and the picture says that it's really a production version, so it's receiving user traffic and serving requests from users. In reality, you can do this conformance test either with a production version or in a staging environment.
I: It's really up to you which version you pick to do this experiment on. Essentially, as part of a conformance test, you're going to continually validate the version and check whether it satisfies the SLOs that you care about. In this experiment we will specify latency SLOs and error-rate SLOs, but the SLOs can be based on any metrics that are available to you. And this is how the demo is about to unfold.
I: So, manually, I'm going to launch a Knative service with a single revision, and then I will launch the Iter8 conformance experiment, and Iter8 is going to automate several steps for me. First of all, it's going to do some basic checking in the beginning: it's going to verify that the version actually exists, that it's not a non-existent revision, and that all the metrics I specify within the experiment actually exist. And then it's going to periodically query the metrics backend, in this case Prometheus.
I: This is a very simple introductory experiment for us to get started with Iter8 and Knative. There are more advanced demos coming up shortly. So let's head over: everything that I am going to talk about today is documented as part of iter8.tools, so you can try out this experiment at home in a couple of minutes.
I: So I'll say that I already set up a few things here. Obviously I have a local Kubernetes cluster, it's running Knative, and it also has Iter8 installed. So some of the setup has been taken care of in the background already, so that I can get started with the demo. I've gone ahead and launched this Knative service; it's a very simple service taken from the Knative tutorials.
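A minimal sketch of such a service, assuming the standard Knative Serving v1 API in the style of the Knative "hello world" tutorial; the service name, revision name, and image here are illustrative placeholders, not necessarily the ones on screen:

```yaml
# Minimal Knative Service; names and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  template:
    metadata:
      name: sample-app-v1        # becomes the revision name
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "blue"
```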
I: Of course, in a real environment, a production environment, your service is probably receiving real user requests, but in this demo I'm just going to simulate user requests by generating a few requests myself, using this tool called Fortio.
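For reference, a Fortio invocation along these lines can generate that kind of steady request stream; the URL, rate, and duration are illustrative:

```sh
# Send ~8 requests/second for 10 minutes to the service's URL (placeholder host).
fortio load -qps 8 -t 10m http://sample-app.default.example.com
```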
I: All right, so I'm about to create the Iter8 experiment, but first let's take a quick look at what this experiment is going to do.
I: First of all, I specified that the target of my experiment is this particular Knative service, the sample-app service in the default namespace. And if we dive a little bit deeper, I'm specifically saying that within that service there is a revision called sample-app; we want to pick that up. That is what we are going to experiment with, and we want to validate this version by verifying that it satisfies these objectives.
I: So these are the objectives that I would like my version to satisfy: its mean latency needs to be within 50 milliseconds, its tail latency needs to be within 100 milliseconds, and its error rate needs to be within one percent. If I see these things happening, then I would say: yes, your version is great. Well, not me, but Iter8 is going to say your version is great. Otherwise, Iter8 is going to declare that your version is not satisfying the objectives. And there are other things: there is an initialization action that I'm specifying here.
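A rough sketch of what such a conformance experiment looks like, with field names, metric names, and the init task approximated from the Iter8 v2alpha2 docs of that era rather than copied from the screen; the experiment name is a placeholder:

```yaml
# Approximate Iter8 conformance experiment; metric names, the start task, and
# exact field spellings are assumptions based on contemporary Iter8 docs.
apiVersion: iter8.tools/v2alpha2
kind: Experiment
metadata:
  name: conformance-sample
spec:
  target: default/sample-app          # the Knative service under test
  strategy:
    testingPattern: Conformance
    actions:
      start:
      - task: knative/init-experiment # the initialization action mentioned above
  criteria:
    objectives:
    - metric: iter8-knative/mean-latency
      upperLimit: 50                  # milliseconds
    - metric: iter8-knative/95th-percentile-tail-latency
      upperLimit: 100                 # milliseconds
    - metric: iter8-knative/error-rate
      upperLimit: "0.01"              # one percent
  duration:
    intervalSeconds: 10               # gap between iterations
    iterationsPerLoop: 10             # ten iterations in total
```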
I: Okay, the experiment is created. Now we want to see what this experiment is actually doing. There's a little CLI tool that Iter8 provides, called iter8ctl. You can use it to periodically see how the experiment is proceeding and what it is reporting back. The experiment looks like it's just getting started, so there is no data coming back yet; we'll look into what comes back. But, you know, the experiment itself is a Kubernetes resource, so we can just kubectl get it.
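In other words, something along these lines; the experiment name is a placeholder, and iter8ctl flags vary by version:

```sh
# The experiment is a regular Kubernetes resource, so plain kubectl works:
kubectl get experiment conformance-sample --watch

# iter8ctl renders a human-friendly summary; the quick-start pattern of that
# era piped the experiment object into it (exact flags may differ):
kubectl get experiment conformance-sample -o yaml | iter8ctl describe -f -
```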
I: We are seeing that three iterations of the experiment are completed. Going back to the experiment, I missed one part: the duration section of the experiment says that there are 10 iterations, and in between each iteration there is a 10-second gap. So, periodically, Iter8 is doing this metrics fetch followed by an evaluation of the version. It looks like it's now beginning to get some metrics from the metrics backend.
I: A few requests have gone to your version, and it is reporting back the mean latency, the tail latency, and the error rate, and your version is doing great: all the objectives are satisfied. So, as I said, this is a very, very simple conformance experiment, but it already demonstrates how Iter8 validates your application version in this simple setting. All right, so I'm going to go ahead and stop this experiment and move on.
I: Next up is, hopefully, a bit more interesting experiment. In between these demos I need to do a little cleanup so that my demo environment is fit for the next step. All right, so let's go back and look at our next demo. Here's what we're going to do: in the next demo we're going to do a canary release of a Knative application. In a canary release, you obviously have a baseline version and you have a candidate version that you're canary-releasing.
I: Once again, we are going to do validation of these versions; we are going to check whether they satisfy the SLOs that you specify in the experiment. This time we are going to progressively shift traffic to the candidate if it satisfies the SLOs, and at the end of the experiment, assuming the candidate is doing well and is found to be a good one, it satisfies everything that you want it to satisfy, we will promote it. In other words, it will become the new baseline.
I: It will essentially take over all of the traffic. That's how the experiment ideally should end. So this is the picture: once again we have user traffic, and in the beginning the candidate is going to get a very small percentage of the traffic, but hopefully it is going to grow.
I: Iter8 is going to use the traffic spec that is part of the Knative service resource to do this traffic shifting, and, as I said, in the end it's going to automatically promote your candidate version, because it would emerge as the winner in this experiment. It would have satisfied all the SLOs, and it will end up being the one and only revision; the other, baseline, revision will be garbage-collected by Knative. All right, so once again, this is how the experiment is going to unfold.
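The traffic spec being referred to is the standard `traffic` block of a Knative Service. A sketch of an intermediate state during the rollout, with illustrative revision names and percentages that Iter8 would keep updating as the experiment progresses:

```yaml
# Knative splits traffic between revisions according to this block.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
spec:
  template:
    metadata:
      name: sample-app-v2            # candidate revision
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "green"
  traffic:
  - revisionName: sample-app-v1      # baseline keeps most of the traffic...
    percent: 95
  - revisionName: sample-app-v2      # ...while the candidate starts small
    percent: 5
```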
I: I'm going to launch the two versions, launch the experiment, and Iter8 is going to do all of that magic for me: verify, progressively shift traffic, declare a winner, and promote at the end of the experiment. All right, so let's head over to this experiment. A lot of these setup steps (Kubernetes cluster, Iter8, the Knative install) are done already.
I: So this is once again the simple sample application, with a blue version being upgraded to a green version, and in the beginning the green version is not going to receive any traffic. It's deployed but not receiving traffic at the moment; that's how the service specification is written.
I: But this time I have more details specified for the versions. First of all, I have not one but two: a baseline version and a candidate version. The baseline, sample-app-v1, is just a way for me, if anything goes wrong, if I'm not able to do the experiment properly or even if Knative is not working out, to just go back to the baseline. So it's a fallback mechanism. The baseline is my sample-app-v1 revision, and the candidate revision is sample-app-v2.
I: Oh, sorry about the background noise; maybe I'll head over to a different space, and hopefully the background noise will subside. So that's about the versions themselves. Once again I'm using the same criteria, the same objectives: mean latency, tail latency, and error-rate metrics. And here is the action that I'm specifying as part of the experiment for doing the version promotion.
I: So when the experiment finishes, I can simply use a kubectl apply command. Well, it's not me who's using it; it's the Iter8 experiment which is automating this step for me. It's going to use this kubectl apply command and launch the appropriate version. If the version that needs to be promoted is the baseline, it will kubectl apply the baseline manifest; otherwise it will kubectl apply the candidate manifest.
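Conceptually, the promotion action boils down to a conditional apply like the sketch below; the real experiment wires this into Iter8's finish-action syntax, and the winner variable and manifest paths here are placeholders:

```sh
# Run by Iter8 at the end of the experiment, not by hand.
if [ "$WINNER" = "sample-app-v1" ]; then
  kubectl apply -f baseline.yaml     # roll everything back to the baseline
else
  kubectl apply -f candidate.yaml    # promote the candidate to all of the traffic
fi
```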
I: All right, the experiment is created. Once again, let's use iter8ctl to start monitoring how the experiment is unfolding. Nothing has happened so far; it's just starting up. Let's look at the status and the stage of the experiment by doing a kubectl watch of the experiment. A couple of iterations of the experiment are over, and let's also watch the traffic in the service itself.
I: So what we see here in the status is that the traffic split is held at 95/5. You're not actually seeing traffic shifting from the baseline revision to the candidate, because we are not getting any metrics for the baseline or the candidate, so traffic is held at 95 percent for the baseline and 5 percent for the candidate. I wonder what's going on: unavailable, unavailable, and available.
I: So I want to check one quick thing.
I: All right, so there was a downtime with Quay. I'm actually using a Red Hat operator for launching the Prometheus pods, and up until last night the Quay container registry was down, and I'm wondering if it has something to do with it. So the experiment actually is finishing, but it's not finishing the way I wanted it to finish.
I: I was hoping to see some metrics here and traffic shifting to the candidate, but it didn't quite unfold as I was hoping it would. But you can see that when the candidate is not validated, the rollback mechanism actually kicks in. The experiment is over, and Iter8 has actually gone ahead and rolled back the candidate: 100% of the traffic is now going back to the baseline, because it could not confidently declare the candidate the winner in this experiment.
I: So, once again, there is going to be a canary release experiment, and once again, hopefully, we will see some progressive traffic shifting in this particular instance. But this time we are going to segment the traffic and use only a portion of our user base to do the experiment. This is how the demo scenario looks: once again you have a baseline version and a candidate version, but we are only going to use traffic from a specific country, called Wakanda, in our experiment: all the users from Wakanda.
I: They will be split between the baseline and the candidate during the experiment. But if you are not from Wakanda, then you're going to end up seeing only the baseline; the rest of the world is essentially not going to participate in this experiment. It has a clean path to the baseline. But if you're from Wakanda, then we have an experiment for you, and you will either go to the baseline or the candidate, based on the traffic-split percentage with which we spread the traffic from Wakanda.
I: Once again, we have the same SLOs that we want to verify for the baseline and candidate versions, and once again, hopefully, we will progressively roll out the candidate. But this time the progressive rollout will happen only for users within Wakanda. As I said, all the other users are unaffected by this experiment. Even if the candidate is doing very well, we don't want the rest of the users to see the candidate version, because we want to minimize the exposure. That's the idea.
I: So this time I'm going to go ahead and launch two different Knative services, and I'm going to use some Istio magic under the covers to do this experiment, because the traffic segmentation is a capability that I actually get from Istio virtual services. I can overlay that on top of the Knative resources that I'm creating to achieve this experiment. So I'm going to go ahead and create two Knative services.
I: So my two versions here are Knative services, not revisions, but Knative services, and I will split traffic between them based on headers. I'll check whether a request is from Wakanda, and based on that I will decide to use it or not within the experiment; that policy is specified as part of these two virtual services. And then I'll launch the experiment, and the experiment will once again do the validation and traffic shifting for me.
I: What is this virtual service saying? If you look at the virtual service, I have a custom host. If you're doing this in production, you probably have your own host to which you're routing traffic. I have a custom host, and I'm saying: if the request for this host comes from Wakanda, I can split it between the baseline and the candidate version. But if the request for this host is coming from elsewhere, other than Wakanda, we just go to the baseline version; it's not going to participate in this experiment. That's what the virtual service is letting me do.
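A sketch of such a VirtualService, assuming a header named `country` carries the user's origin; the host, gateway, and service names are illustrative placeholders (the demo's exact manifests are on iter8.tools):

```yaml
# Requests with a "country: wakanda" header enter the experiment's split;
# everyone else is routed straight to the baseline service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sample-app-routing
spec:
  hosts:
  - example.com                      # custom host, a placeholder
  gateways:
  - knative-serving/knative-ingress-gateway
  http:
  - match:
    - headers:
        country:
          exact: wakanda
    route:
    - destination:
        host: sample-app-v1.default.svc.cluster.local
      weight: 95
    - destination:
        host: sample-app-v2.default.svc.cluster.local
      weight: 5
  - route:                           # everyone outside Wakanda
    - destination:
        host: sample-app-v1.default.svc.cluster.local
```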
I: Let's go ahead and generate some traffic; this time the traffic generation is a little bit more involved. I have traffic coming from Wakanda to my services, and I have traffic coming from Gondor to my services; Gondor stands in for the rest of the world.
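Simulating the two populations can be as simple as tagging the generated load with different headers; again a sketch, with placeholder header values, host, and ingress address:

```sh
# "Wakandan" users: eligible for the experiment's traffic split.
fortio load -qps 8 -t 10m -H "country: wakanda" -H "Host: example.com" \
  http://istio-ingressgateway.istio-system/

# Users from Gondor (the rest of the world): routed straight to the baseline.
fortio load -qps 8 -t 10m -H "country: gondor" -H "Host: example.com" \
  http://istio-ingressgateway.istio-system/
```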
I: Once again, these are just placeholders. You can segment the traffic in whichever way Istio actually allows you to segment it: it can be based on headers, country headers, user headers; it can be based on other parts of the request as well. Okay, so let's go ahead and launch the Iter8 experiment.
I: And here's the virtual service, in terms of what it's doing with the traffic. Wow, okay, this time I'm getting some metrics; this is looking hopeful now. So I am getting metrics for my baseline version, and yes, the baseline version is satisfying all the SLOs, true, true, true, because its mean latency is within 50 milliseconds, its tail latency is within 100 milliseconds, and its error rate is zero. And it looks like I'm beginning to get some metrics for my candidate version.
I: The candidate version is also satisfying all the SLOs that are specified as part of the experiment, the experiment is proceeding, and traffic is beginning to shift. Initially it was a 5/95 split between the candidate and the baseline, but now you see the candidate taking on 45 percent of the traffic and the baseline at 55.
I: Once again, this is all traffic from within Wakanda. The non-Wakandan users from the rest of the world are just left undisturbed for the purpose of this experiment.
I: Perfect, so I'm done with all the demos. You can head over to iter8.tools, because there are more demos out there. For example, you can do a dark launch of an app, and you can use traffic mirroring and other Istio capabilities to route traffic. And you can do other things: I showed you the Istio networking layer, but Knative supports other networking layers, and you can experiment with Iter8 and Knative on those other networking layers too.
I: Also, soon we're going to be putting out a couple of new releases of Iter8, where you will be able to go beyond Prometheus and get metrics from any REST API. We are also planning to include support for A/B and A/B/n experiments, which I mentioned but did not demo today, and we also want to enable GitOps-style experiments: use experiments as part of your GitOps pipelines and CI/CD pipelines. So you will start seeing more samples along these dimensions.
I: Oh, and we do have a KubeCon + CloudNativeCon Europe 2021 talk, where the Iter8 team will be talking about best practices in Kubernetes experimentation. So if you're attending KubeCon, be sure to check us out there. All right, so you can join us on iter8.tools and on our Slack workspace, and you can also check out our replays and raise questions or comments there.
A: Thank you so much, Srini. As you were mentioning, you have a presentation at KubeCon, and you mentioned to me that you were interested in getting some feedback from the Knative community, so I just wanted to share that with everyone here. There were a ton of comments and some questions in the chat.
A: If this is available to you, maybe you could start the conversation with a question or comment that you may have.
B: Well, this is Max. I don't have a question, but I have a comment, which is that I love it, and this is very timely. I think we need to figure out how to use this more, but we'll be following up. I guess this is all open source, just to be clear? It's open source under... I'm looking at the repos right now; the license is compatible, right, with what we have right now in Knative?
I: There's a question here: use Grafana, since it is more visual for people to follow along. Yes, we do have a story for Grafana. We want to bring metrics other than Prometheus metrics into Grafana, so it might take a little bit more time, but yes, we are doing that. I'm just seeing other questions here.
I: So if you have more questions or comments: you see, part of the reason why we are doing this presentation is to understand the needs of the Knative community. What are the types of release-automation pain points, and what are the types of experimentation needs and pain points, that are worthwhile attacking? So this is an initial presentation, and it is definitely based on our interaction with the Istio community and our interaction with the KFServing community, but for the Knative community...
F: I have the opposite question: what do you want from us? What did you find good, or what could improve, about Knative when you were building something like this? What's your kind of feedback to us, in the same way?
I: That's a good question. So, internally, when I was demoing it, in fact to Doug, who is on the call here, his suggestion was: hey, why not do the traffic segmentation using the Knative service instead of using an Istio virtual service? And my answer was that the Knative service doesn't support features like that, probably because Knative is meant to work on top of multiple networking layers; I don't know the answer. But anyway, some of these features are what makes experimentation interesting, what makes experimentation useful.
F: Yeah, I've definitely heard that before, for this kind of work, that our current traffic stuff in Knative is hard to use outside of a demo; it's hard to actually use it for anything other than the simple cases. That's really interesting. I'd definitely encourage any issues that you could open that kind of... I mean, I don't think it is the case that we think you should...
F: I mean, I'm not speaking for the project, but I think we have a traffic section such that you shouldn't have to go down into Istio to do at least most of the stuff that you were doing there. So it'd be interesting if you had a sketch of what the API in Knative would look like so these experiments could be orchestrated; I think that'd be useful. We can only say no anyway, right, if it doesn't fit.
I: So the other question that I do have, the obvious question, is: what are some things we can do to improve adoption of Iter8 in the Knative community? This is obviously the top question in our heads as part of the Iter8 project, so any feedback along those lines will also be very welcome.
B: Okay, yeah, this is Max, I'm also from IBM, but we'll also follow up with the community, obviously, on all of this. One question for you: this seems to be a fairly mature piece of work, and it was in research, so I'm assuming that this is based on previous work that you had before? Or is this all done for Knative?
I: So this is definitely based on previous work. Before this, we have worked with the Istio community, in particular the Kiali part of Istio, to support experimentation, and we've also worked with the KFServing community, which is about serverless inferencing on top of Knative. So we have some sense of what the community wants in terms of experimentation and release-automation tooling. This is based off of that; we are coming off of that.
G: Can you hear me? Yes. So have you talked to customers or users using this in production, or have you talked to someone actually using this in a customer setting? Or is this, at this point, only open source work?
I: It's mainly open source. However, we do have users who want to use this in production, and in particular we're engaging with them in the open. So we are talking to MLOps communities, and we're talking to TripAdvisor and Seldon, who are interested in using Iter8 features in production. And the top comment that we have right now is that if there's a way to make this GitOps-ready, to inject experiments into GitOps pipelines, they would be using this today.
I: So that's the kind of feedback that we are getting, and that's definitely a direction that we are planning to head toward pretty soon. But this is what we have heard so far from users who want to use this in production. We have more usage in, I would say, non-production environments, but for production users, this is what we are hearing.
G: Yeah, so, that was my feedback; I put it in the chat, but these things also end up on YouTube and everything in the chat is lost, by the way, so I'm saying it so it's recorded. That was my feedback on GitOps, and I think Scott also had some comments in terms of production concerns and security. So my recommendation is, on the web page or somewhere like that, point out documentation or implementations of doing GitOps with separation of concerns.
G: I saw that the program is doing kubectl commands against itself. If you're using this in a cluster that you call the management cluster, and then you have a cluster that is the dev cluster and a cluster that is the production cluster, how is the visibility of the things that you're changing reflected back into Git, so an operations team can monitor that?
G: So those types of operational concerns around security and transparency are the things that the next customer is going to bring up. I'm consulting with IBM customers, I work in engagements, and that's the first thing that I'm going to look at. So if there's no real guidance there and I have to dig through the documentation to really find it... yeah, that hurts adoption. So, that's right, yeah.
I: All right, I actually want to thank the community for giving us a chance to introduce Iter8 to you, and hopefully we will keep going from here.
A: And thank you so much, Srini, for presenting as well. I think we have come to the close of our meetup today. We are going to upload this recording to the YouTube channel, so look for it there, and if you have an implementation or an app that is running on Knative, please do consider submitting a demo as well.