From YouTube: Argo CD and Rollouts Community Meeting Apr 2023
A
B
Okay, good morning everyone, and welcome to the April 2023 Argo CD and Rollouts community meeting. I'm your host for today, Jesse, maintainer of the Argo project. If you haven't already, please add yourself to the attendee list in the Google Doc agenda. A reminder that the Argo project adheres to the CNCF Code of Conduct, so please be courteous and respectful during this meeting. Also a reminder that this meeting is being recorded and will be uploaded to YouTube.
B
So today we have a couple of topics. First, we'll be going over what's new in Argo CD 2.7; then we'll have a quick introduction of the Argo Rollouts plugin feature, as well as an actual implementation of the plugin by OpsMx.
B
Before we get started, also a reminder that we have KubeCon EU in just a few weeks, as well as ArgoCon EU, co-located with KubeCon. I think it is sold out, but if you happen to be there, just stop by the Argo booth and say hi to some of the maintainers.
B
C
All right, yeah, hi everyone. Let me start by sharing my screen.
C
Okay, let me take this out of the way. Yeah, so I'm going to be using the super nice blog post that Kostis from Codefresh wrote, related to the 2.7 RC1 release. So 2.7 RC1 was released more than a week ago.
C
It has 35 new features, 83 bug fixes, and several other small enhancements. In the blog post there are some bigger features highlighted, and also some important smaller features presented. So starting off, the first thing that may pop up for you when you install Argo CD 2.7 is the enhanced logs view. Before, we always had this logs view in the Pod resource.
C
So when you click a Pod resource there's a logs tab. This logs tab was enhanced, so right now there are a few different modes that can be selected. The follow mode is something that we had in the past, which works similarly to tailing logs in the CLI. But now we also have the historical mode that allows you to filter by time range and also filter by specific strings in the logs, and it was also redesigned to be more performant.
C
So this is also something to expect in the new logs tab. This was implemented by Alex Collins from Intuit, and yeah, it's going to be there in 2.7. The next feature is related to Kustomize.
C
So if you use Kustomize to generate your manifests, you might have noticed that not all features of Kustomize are currently available in Argo CD. For example, if you want to leverage the feature where Kustomize allows you to define the namespace or a prefix for your resources, that wasn't available. With this new feature, if you look here, there are some new attributes, and Argo CD will interface with Kustomize to properly generate the manifests.
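As a rough illustration of what such an Application source might look like (the app name and repo here are placeholders, and the exact field set should be verified against the Kustomize section of the Argo CD docs for your version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook            # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deployments.git   # placeholder repo
    path: overlays/prod
    kustomize:
      namespace: team-a      # sets the Kustomize namespace for all generated resources
      namePrefix: prod-      # prefixes every generated resource name
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a
```

With these attributes, Argo CD passes the namespace and prefix through to Kustomize at render time instead of requiring you to hard-code them in the overlay.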
C
UI extensions can now use backends. So basically, Argo CD already had functionality for UI extensions.
C
The way it worked, it was mainly a UI component that you could deploy in the Argo CD API server, and that would make an additional tab available in the configured resource, allowing you to inspect any resource owned by that specific application. But that was kind of static, in a way, because the only data available to you is what is available in the application resource.
C
So with this new feature, developers will be able to configure UI extensions to communicate with backend services.
C
So the way it works is mainly that Argo CD will behave as some sort of reverse proxy, and if you enable a specific extension in the Argo CD config map, Argo CD will forward requests to the service that you configured. This service can live anywhere, inside or outside Kubernetes. Just to highlight: requests will have to be authenticated, and there is also new RBAC configuration that will be required to enable those requests to go through. That was a new feature implemented by me at Intuit.
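As a sketch of what that wiring might look like (the extension name and backend URL are illustrative; consult the proxy-extensions page of the Argo CD docs for the authoritative schema), the extension would be registered in the Argo CD config map roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  extension.config: |
    extensions:
    - name: my-extension                 # hypothetical extension name
      backend:
        services:
        - url: http://my-extension-backend.tools.svc.cluster.local  # placeholder backend service
```

Argo CD then proxies authenticated requests for that extension to the configured URL, and an RBAC rule along the lines of `p, role:readonly, extensions, invoke, my-extension, allow` in `argocd-rbac-cm` is needed before those requests are allowed through.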
C
That is going to be there for 2.7. All right, moving forward, one more feature: Pod readiness gate errors will now be surfaced to the Argo CD UI, which is a super nice enhancement. We have, at Intuit, several users that are kind of lost trying to find the root cause whenever something goes wrong with their deployments, and this is the type of feature that personally I think is super valuable for Argo CD. So with this feature...
C
Users will be able to see a new section in the Pod resource stating the exact error that happened for that Pod during the sync, in this case for readiness gates. I am aware there are other tasks being worked on also to improve visibility of errors in the UI, but this one is a great one that is going to be available in 2.7, implemented by Myerson from Akuity. All right, and yeah.
C
One more feature that is highlighted in Kostis's blog post is the ability users will now have to filter resources whenever they trigger the Argo CD `app wait` or `app sync` CLI commands. Before, those commands were mainly listing every resource belonging to those applications.
C
So right now, for example, if your application has thousands of resources, but the user is just interested in seeing the deployments that are being synced, there's now the possibility to add an additional CLI flag to filter just the resources that the user wants, which is a super nice enhancement for the Argo CD CLI. This was implemented by Mahesh from InfraCloud.
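As a hedged sketch of the kind of invocation this enables (the flag name and the `GROUP:KIND:NAME` syntax should be confirmed against `argocd app sync --help` on your version; app and resource names are placeholders):

```shell
# Sync only one Deployment out of a large application
argocd app sync my-app --resource apps:Deployment:web

# Wait for just that resource, rather than watching every resource in the app
argocd app wait my-app --resource apps:Deployment:web
```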
C
At the bottom of the post we have other smaller features. For the sake of time, I'm not going to go through every single one of them, so I invite you, if you're considering upgrading Argo CD to 2.7 in the near future, to please take a look at the post and read the complete feature list; you may find something interesting for you and your company to use.
C
So one last thing for Argo CD 2.7: following our roadmap, 2.7 is targeted for early May, so keep your eyes open; early May is probably when we're going to have the final release out. If you're willing to try the RC1, please report back any bugs or issues that you may find; that will help us release 2.7 with as few bugs as possible. And that's it for me.
A
B
Right, thanks Leo. Any questions for Leo, or about the 2.7 release?
B
C
Yeah, in the Labs project we have a repository called argocd-extension-metrics, and the use case that we have is that we want to surface Prometheus metrics to users, so they can visualize, right from Argo CD, the load and the memory consumption of their application, for example.
C
So for that, we need some sort of mechanism that communicates with Prometheus, in our case, and provides this data to the UI. That's the use case that we currently have, and we want to make it available. So this extension is going to be provided as part of the...
D
C
...our open-source project, and it's going to be the first Argo CD extension to use this backend functionality.
B
Yeah, we look forward to seeing the demo of that. Actually, there was a mini demo earlier last year, but I'm sure there'll be more of that. All right: if there are no questions, moving on, the next topic we have is about Argo Rollouts plugins, by Zach, and then we'll actually see an implementation of this feature.
E
So I'm trying to keep this relatively quick, since there will be a demo of an actual implementation later. But as part of the Argo Rollouts 1.5 release (the RC1 is out now), I think one of the bigger features, or changes in the ecosystem, is that Argo Rollouts is basically getting plugin support.
E
We're currently just starting out with plugin support for traffic routers and metric providers, and that's kind of where we currently are. I have some other plans that I plan on bringing up later, but I've kind of come up with a system, I guess, for discovering plugins: some standardization around naming and things like that. So today, most plugins...
E
If they want to be listed within the docs (each of the traffic routers or metric providers will have a section in the docs related to which plugins are there), I'll basically ask that, in order to be listed there, those projects keep their plugin source code within the argoproj-labs org. And it's going to kind of follow a naming convention of `rollouts-`, then whatever the plugin feature is, and then `metric-plugin` or `trafficrouter-plugin`, with some separators in there.
E
There are pretty decent dev docs that kind of go through some of this and how to create plugins. I also plan on doing a more in-depth blog post around creating a plugin: what's required to create a plugin, how to debug plugins, etc. That should be coming out soonish as well. Just a quick, kind of high-level overview of how these plugins work: there's basically a new config map that has been added to Argo Rollouts.
E
It has two top-level elements, for metric provider plugins and traffic router plugins, and one of the key things here is the name field. The name field has kind of a standard convention of namespace/plugin-name.
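Putting the pieces just described together, a minimal sketch of that config map could look like the following (the key names follow what's described here, but treat the exact schema as something to verify against the Argo Rollouts plugin docs; the URLs, hash, and plugin names are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argo-rollouts-config      # read by the rollouts controller in its namespace
  namespace: argo-rollouts
data:
  metricProviderPlugins: |-
    - name: argoproj-labs/sample-prometheus        # namespace/plugin-name convention
      location: https://example.com/releases/metric-plugin-linux-amd64  # placeholder download URL
      sha256: 0123abcd...                          # recommended integrity check for downloads
  trafficRouterPlugins: |-
    - name: argoproj-labs/sample-router
      location: file://./plugins/sample-router     # file-based alternative, mounted by the user
```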
E
It's optional, but I highly recommend using the sha256 hash check if you are downloading plugins, because who knows what could be there. The other option is file-based, which then leaves it up to the end users to figure out how to get that file mounted within the rollouts controller.
E
Some basic usage of this: what this basically allows your plugin users to do is, within rollouts, for traffic routers, there's basically a new plugins field, and this is a map of strings.
E
I don't know why I keep doing that. This map here maps to the name of the plugin, and those basically have to be the same. That then allows the plugin code to easily find its particular configuration, and plugin authors can basically put anything they want inside this field, for whatever configuration they need. That's the traffic router plugin; this is the example for analysis templates.
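For illustration, here is roughly how both usages described above might appear on the user side (the plugin names and config fields are hypothetical, and must match whatever the plugin author documents and whatever name is registered in the rollouts config map):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
spec:
  strategy:
    canary:
      trafficRouting:
        plugins:
          argoproj-labs/sample-router:   # must match the name registered in the config map
            ingress: demo-ingress        # plugin-defined configuration
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
  - name: success-rate
    provider:
      plugin:
        argoproj-labs/sample-prometheus:        # same namespace/plugin-name convention
          address: http://prometheus.local:9090 # plugin-defined configuration
```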
E
It's basically the same concept: there's a provider plugin, the name of the plugin following the namespace/plugin-name field, and then the plugin authors can add any particular struct that they want for their particular plugin. Yeah, that's kind of how that goes. There's one thing: if you are creating a traffic router plugin, it is currently left up to the plugin authors to define...
E
...the Kubernetes RBAC requirements for those particular resource kinds, and to make sure that they, you know, inform their users how to either bind them to the default rollouts service account or not. Analysis templates probably don't generally have that problem. But that's basically how you use plugins. They're relatively simple to implement: for the most part, you implement the standard Go interface that is currently used for, say, a metric provider; there are just some small differences.
E
A new function has been added, basically an init, where you can set up any type of clients: any kube clients, any Prometheus clients, or whatever metric provider client you need. And then error types: there are just some RPC things, since the plugin system uses RPC in the background (it spins up a process), and there's some typing stuff that had to change. So there are some minor differences in this interface, but the functions generally stay the same between both metric providers and traffic routers.
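To make the shape of that concrete, here is a small, self-contained Go sketch of the pattern being described: an init hook for one-time client setup, an RPC-friendly error type, and a measurement method. The names here (`InitPlugin`, `RpcError`, `prometheusPlugin`) are illustrative assumptions, not the actual argo-rollouts plugin SDK:

```go
package main

import "fmt"

// RpcError mirrors the point above that plugin methods return an
// RPC-friendly error type instead of a plain Go error.
type RpcError struct{ ErrorString string }

func (e RpcError) HasError() bool { return e.ErrorString != "" }

// MetricProvider is an illustrative stand-in for the plugin interface:
// an init hook for client setup plus a measurement call.
type MetricProvider interface {
	InitPlugin() RpcError
	Run(query string) (float64, RpcError)
}

// prometheusPlugin is a hypothetical plugin implementation.
type prometheusPlugin struct{ address string }

// InitPlugin is where a real plugin would build its kube or Prometheus clients once.
func (p *prometheusPlugin) InitPlugin() RpcError {
	if p.address == "" {
		return RpcError{ErrorString: "address not configured"}
	}
	return RpcError{}
}

// Run would query the metric backend; here it returns a stub value.
func (p *prometheusPlugin) Run(query string) (float64, RpcError) {
	return 0.99, RpcError{}
}

func main() {
	var mp MetricProvider = &prometheusPlugin{address: "http://prometheus.local:9090"}
	if err := mp.InitPlugin(); err.HasError() {
		panic(err.ErrorString)
	}
	value, _ := mp.Run(`sum(rate(http_requests_total[5m]))`)
	fmt.Println(value)
}
```

In the real system this interface is served over RPC from a separate process, which is why the error type is a plain serializable struct rather than a Go `error`.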
E
So you have, you know, SetWeight, SetHeaderRoute, all the things that, if you have looked at rollouts implementing a traffic router or metric provider, will be very familiar functions. Yeah, that's basically my quick intro. People should feel free to play around with plugins now that they're there. So, any questions?
E
The rollouts Helm chart is actually not really maintained by the Argo Project maintainers; it's kind of a separate community that maintains it. If that makes sense to do, I would suggest probably opening a ticket on that Helm chart repo.
B
Oh, okay! Sorry. Okay, then.
E
Yeah, at startup, yep. So there's some documentation that kind of talks about some pros and cons of downloading versus file mounting, you know, as far as availability is concerned, and running in HA, and things like that; there are some things there to think about. But it's a really convenient option to easily get plugins installed, so there's a pro and con there, and each user should decide what works for them.
B
Yeah, it looks like the question is: what happens if the plugin is unavailable temporarily? I'm guessing the...
E
After talking with Leo kind of heavily about it, I think we both lean towards the idea that, if the plugin isn't available, the controller not starting is a good thing. One of the kind of interesting things: in 1.4, or maybe it was 1.5, there was a pretty big rewrite of leader election. So basically, one of the ideas is that you can spin up, you know, two or three Argo Rollouts controllers.
E
Each of them will download the plugin, and let's say for some reason you delete a pod, or a node goes away and deletes that pod: another pod will become leader and already have that executable available to it. So that provides some safety, I guess, against plugins not being available. But that is one of the risks of downloading the files, basically.
E
B
And my question was about the naming convention that you are using. I noticed that "plugin" is a suffix for the plugin names, but if you wanted it to be more discoverable or organized, you might consider putting, like, `rollouts-metric-plugin-` and then `opsmx` or `gatewayapi` at the end. So...
E
Yeah, that's fair. GitHub searching does fairly well at being able to find anything, but that's an interesting point, and it's early enough that we can still play with that. I'm hoping the docs are kind of the more popular source for finding those, but it is nice that there's some standard, at least in the repos. That's good feedback.
A
C
I guess when we discussed this naming convention, the idea was to have the repo name as part of the main structure of the plugin ID, so we would avoid clashes between different plugin providers. I think that was the main intention, right, Zach, that we discussed in the past? Yeah.
B
So we have someone from OpsMx who will demo an implementation, is that right? Yes.
C
F
So hey everyone, I'm Shraddha; I work with OpsMx. We basically work in CI/CD; one of our products is ISD, wherein we use Argo and Spinnaker to deliver software.
F
We've been using Argo Rollouts for almost about a year now, and basically we'd been using the job-based provider; but with the recent plugin work that Zach has done, we are migrating to the plugin now. He's also created a repo for us in the argoproj-labs organization, and that is where our binary and our code base stay.
F
So basically, there are a good few benefits that the plugin-based approach gets us compared to the job. The first thing is that we no longer need to create separate RBAC resources, which were needed to access resources outside of the job pod that we had. And second, if you can see all this stuff over here: this used to reside in a separate config map, which we, you know, kind of had to mount into the pod.
F
So this stuff now goes away for us as well; overall, things become more simple and minimalist for us. If you can see over here, this is the plugin provider, and this is the name of the plugin that we have now. So the key feature that our product ISD brings is that we support both log as well as metric analysis. We support different kinds of providers: we've got Splunk, and then we've got Datadog, Elasticsearch, Prometheus, New Relic, and there are many more as well.
F
So in this example, if you can see, I've used Prometheus for the metric analysis and Elasticsearch for the log analysis. This is how the metric template looks: we have a metric template which is defined in a config map, and over here, what we have are basically simple application health metrics that we are monitoring. And you can see over here that we are not just targeting one; we've got multiple metrics that we can target. And for the logs...
F
Just give me a second, I'll shift this down. Yeah, for the logs: the log template also exists in a config map. Here, if you can see, we're targeting Elasticsearch, and in the error topics we can define which error kinds we are targeting and what their criticality is. So basically, at the core of our product...
F
...what we have is a strong verification engine, which is, you know, backed by ML models and statistical tests. This engine works on the metrics and the logs that are collected while your analysis is being run, and it gives a score, based on which we either roll back, if the score is less than the desired score, or we progress on to the next stage. So a couple of hours back I did a couple of runs in Argo.
F
If you can see over here, this is one of the AnalysisRun resources that we have. And here is the report URL that got generated for us; if I click on it, it will take me over here. So these are the metrics that we had tracked, and here we can see individual details about each of the metrics as to how they performed within the three-minute analysis duration.
F
But then, if we go to "expected", we'll see that there are different clusters, which we kind of build over here, and these clusters eventually lead to the final score that is displayed over here. So I ran another run over here, in which, you know, there were null pointer exceptions and critical exceptions generated, due to which our analysis failed. And then, if we go here...
F
...we see that we didn't get progressed to the next stage. So I think I'm kind of done over here. Also, I want to give a shout-out to Zach here for all his help and work with the plugin. Oh yeah: Anjali and Gopi, would you want to add something over here, if I kind of missed out on something?
G
The 2.7 features seem really good. Obviously, you saw that for one of the reports we had to go to the external URL. It works well with rollouts, but if we could use Argo CD as a reverse proxy and get it displayed in the same UI, you know, that would make it so much easier for us.
B
Yeah, exactly; so the backend support for Argo CD extensions is probably something you'll also be interested in, because up until this feature, the UI could only show more information about the YAML that you're looking at, so it's very limited: basically, it pretty-prints the YAML into UI components. But now, with the backend, you could build out a UI extension that actually talks to your OpsMx backend and presents information in any way you see fit.
B
So actually, that's probably the next step that you should consider for bringing value to your users.
G
The other feature that we are excited about is the Argo Rollouts plugin capability to interface with newer or different types of load balancers. That also makes it easy for us to customize this; we see quite a few different ways of doing things. And that, in combination with this verification capability, will really help us with the automation there.
B
Yeah, exactly, and that's also been important for us, because it's hard for the project to understand how every ingress controller behaves and works. That separation allows more rapid development of support for a certain ingress or mesh, but it also gives a cleaner separation of concerns, by abstracting out to a plugin system.
E
This is a weird, I don't know, question, maybe a thought. I've been toying around with kind of hacky ideas for these plugins, because there are a handful of them now.
E
Rollouts allows you to have multiple traffic routers defined for a rollout. One of the interesting ways you could use that is: you could feed the percentages of traffic back into your metric system as well. That would just be kind of an interesting idea, because knowing what percentage of weight things are at can determine kind of what your analysis looks like, right, or affect it.
G
So this is a feedback loop, right. The way we think about it is: we go and configure a set of percentages in the routers, but the actual values we want to measure directly from the router and the application, and that becomes our feedback loop into making the decisions. So, for example, if you say "send 30% of traffic to version two", we want to actually measure what level of traffic is coming through for it. Sure.
D
Hey, yes, it's Jason. I'm on the Argo Helm project, and I just want to emphasize, from earlier: we've done a really good job the past year making sure the Helm charts stay up to date with the parent projects. PRs are welcome; issues are welcome. We try to look at when the updates get released for Rollouts, CD, or Workflows, and at any changes that could happen to the values.yaml; we try our best, and we've been communicating with the parent projects.
D
So these new plugin features and stuff that are about to come, or are already here: we're definitely going to try and get those into the charts as well. So yeah, I'm going to ping some upstream folks, and we're definitely going to try and stay up to date as fast as we can. I know the Kustomize manifests are in the parent projects, so that's easier; but we know a lot of companies, including the one I work for, use the Helm charts. So...
B
Okay, any additional agenda items that people want to bring up before we wrap up?
C
I have just one more question, related to the demo that was provided. I really like the logs capability; I'm just curious to understand a little bit better how that works, because I didn't understand the UI that was shown. It seems that it's some sort of proprietary tool that does that, so I'm curious if we're able to configure that plugin to run some analysis on logs provided by Splunk, and have that, yeah, guide our rollouts based on that.
G
Yes, the integrations with Splunk are available. The premise there is: as you are rolling out, we want to look for any of the new kinds of errors, or new logs that are categorized as errors or warnings. And so it is using processing through ML to understand, because it's free-flowing text and data; that is where the ML comes in, to categorize and also to identify...
G
...whether two of these log lines look similar or not. Based on that, it categorizes and groups them, then contextualizes those errors, and then compares whether these errors are new, and based on that, it scores. So yeah, integration with Splunk also exists. So you can say "my application logs are going into this Splunk", and as you roll out, it can pull the data in from there and then verify it.
C
Is it mandatory to work with this score concept, or can I configure a specific analysis template to just check the incident rate of a specific query that I want executed against my Splunk instance? Is that a possibility as well?
G
Yes, you can customize the things that you want to do. You can also train the system: if it comes in and says "these are the errors I found", you can say "ignore it"; or if you find some structure that you never want repeated, you can specify that as a critical issue, so that if it ever shows up, it immediately fails the analysis.
B
Yeah, nothing else. Thank you for, yeah, testing the waters with this new system.
E
They were really, really early adopters, so they got the fun joy of making changes a few times, but it was great working with them. So yeah.
B
All right, I think if there are no other questions or topics, that's a wrap for this month. We do have the weekly contributors meeting, although with ArgoCon and KubeCon coming up, we'll figure out if that's still happening that week; I think only a few people are going to ArgoCon.