From YouTube: Argo CD and Rollouts Community Meeting 1st Sep 2021
A: Today we have a guest, Hongcai. He'll be talking to us about Karmada and how it works with Argo CD, and after that I'll give an update on the upcoming Argo Rollouts 1.1 release and just go over an overview of the new features that are coming. Just a reminder that these meetings are being recorded to the cloud, so everything you say will be uploaded to YouTube later. And with that: are you ready to present?
B: Okay, hello everybody. I'm Hongcai, from China. I'm the maintainer of the Karmada project.
B: The reason why I'm here is that a friend of mine was using Argo CD to distribute applications to multiple clusters, and luckily he is also a Karmada user. He told me that Karmada and Argo CD work together pretty well, so I ran a demo and I want to share it with you Argo folks.
B: Now I'll share the screen.
A: Okay, yeah, we can see your desktop now.
B: Before the demo, I want to give a brief introduction to the Karmada project. Karmada is a multi-cluster federation project. It provides application management in multi-cloud and hybrid-cloud scenarios. Okay, let's look at the architecture here. The Karmada control plane includes an API server (this API server is essentially a kube-apiserver) and a set of controllers.
B: The Karmada control plane is installed in a cluster that we call the host cluster. Now let's look at the environment I prepared.
B: Here I have three clusters. The argocd cluster is a cluster that I joined to the Argo CD server, and the host cluster is where I installed the Karmada control plane; the Karmada API server is here.
B: Okay, the demo basically runs very similarly to this one. Sorry.
B: I forked this repo, and here I made a small change on it. From here you can see all the changes I made: I updated the replicas to 3 and added a PropagationPolicy resource. That's the Karmada API.
B: The policy selects the Deployment and the Service and places them onto clusters, and there is a scheduling rule: the replicas will be divided according to a weight list. From the list we can see that the three clusters all have a weight of 1. That means the three replicas will be deployed to the three clusters, and each cluster gets one replica.
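For reference, a policy along the lines described (selecting a Deployment and a Service and dividing three replicas evenly across three member clusters) might look roughly like this. This is a sketch based on the Karmada PropagationPolicy API; the resource and cluster names are illustrative:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: guestbook-propagation
spec:
  resourceSelectors:
    # Select the objects to distribute to member clusters
    - apiVersion: apps/v1
      kind: Deployment
      name: guestbook
    - apiVersion: v1
      kind: Service
      name: guestbook
  placement:
    clusterAffinity:
      clusterNames: [member1, member2, member3]
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          # Equal weight of 1 per cluster: 3 replicas, 1 per cluster
          - targetCluster:
              clusterNames: [member1]
            weight: 1
          - targetCluster:
              clusterNames: [member2]
            weight: 1
          - targetCluster:
              clusterNames: [member3]
            weight: 1
```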
B: Okay, now you can see the Deployment has been propagated to three clusters, and you can see from here that this policy is the one I just showed.
B: This is the ResourceBinding API; it's an internal API, and you can see here that the Deployment has been bound.
B: Yeah, actually we have another API that isn't shown here, for objects in another namespace, where the relationship is not defined by owner references.
A: Can I ask a question? So in the resource tree I don't see a ReplicaSet under the Deployment. Is that just hidden, or were you able to somehow prevent that from happening?
A: Sorry, so normally, when you create a Deployment, the Deployment then goes and creates ReplicaSets, and then the ReplicaSet creates the Pods. Here we see the Deployment only having a ResourceBinding underneath, so somehow it didn't create the ReplicaSet. How did you change that behavior?
B: Yeah, okay: we reused the kube-apiserver as the Karmada API server, but the controllers are very limited, as you can see from here.
A: Oh, I understand. So the Karmada API server masquerades as a real API server, but really, when it handles Deployment objects, it goes and distributes them to the other clusters. Okay, that's pretty clever. Cool.
B: My friend said that before, when the application was propagated to many clusters, they would have to create many Applications here; with Karmada they only need to create one.
A: This is really cool. Does this support any other workload types aside from Deployments?
A: Can they propagate things like, I don't know, StatefulSets?
B: All the other resources will be supported, and here in the resource selector you can specify the apiVersion, the kind, and the name.
A: Okay, so it applies to anything, even CRDs: anything that you just need to propagate to many clusters.
A: Yeah, this is really interesting. I've never seen something quite like this.
B: I have a question. From here I can't get the health status of the ResourceBinding, so I wonder if we can contribute our health checks here.
A: Yeah, actually I was going to suggest doing that. I noticed when you clicked on the ResourceBinding, it had health information about each individual subcluster, and it would be useful in the UI to be able to say, okay, one of the clusters is somehow not able to propagate, and so the overall status of that ResourceBinding is, you know, either progressing or degraded. But you seem to have already added the health check for the PropagationPolicy, but not the ResourceBinding. Is that right, or did someone?
A: Oh no, I'm sorry, that's not a health check; that's just a sync status. I take that back. So yeah, there are no health checks for the Karmada CRDs, and the way you would go about contributing them would be to submit a PR to the Argo CD repo with Lua scripts that are able to assess a single resource, looking at its status and spec, and return a string that is either, you know, healthy or degraded or progressing or suspended.
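As a rough illustration of what such a contribution involves: Argo CD health checks live in the repo as `health.lua` scripts under the resource customizations directory, receive the live object as a global `obj`, and return a table with a `status` and an optional `message`. A minimal sketch (the condition type names below are placeholders, not the actual Karmada status schema):

```lua
-- Sketch of an Argo CD resource health check for a Karmada CRD.
-- Argo CD passes the live object as the global `obj` and expects
-- a table with `status` (Healthy/Degraded/Progressing/Suspended)
-- and an optional `message`.
local hs = {}
hs.status = "Progressing"
hs.message = "Waiting for resource to be scheduled"
if obj.status ~= nil and obj.status.conditions ~= nil then
  for _, condition in ipairs(obj.status.conditions) do
    -- Condition type names here are illustrative placeholders
    if condition.type == "Scheduled" and condition.status == "False" then
      hs.status = "Degraded"
      hs.message = condition.message
      return hs
    end
    if condition.type == "FullyApplied" and condition.status == "True" then
      hs.status = "Healthy"
      hs.message = "All workloads applied to member clusters"
      return hs
    end
  end
end
return hs
```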
B: Yeah, okay, all right.
A: Are you asking me, or someone else?
B: Okay, I have another question. We have another resource, but it's not in the same namespace as the ResourceBinding, so it can't be shown here.
A: Oh, I see, and you're asking whether it's possible to show things which are not owned by the tree of managed objects, whether it's possible to show those in the Argo CD UI.
A: So there are two answers to that question. The first is: Argo CD has the ability to show you everything in a namespace, regardless of whether it's managed by a Git repo or not. But I think you just mentioned that the object you're interested in is actually in a different namespace, and so my first answer won't help you, because that feature we already have won't allow you to see anything outside of this guestbook namespace.
A: The second answer is: there have been requests (there are already issues opened about this) asking, can I associate other resources which are not really owned by the object, but just make them somehow a child of it for display purposes?
A: So yeah, we're open to somehow allowing users to configure a child relationship to objects, even though they're not technically owned by them. In fact, we actually already do this with some types of objects. The one in particular is a Service: a Service always creates an Endpoints object, but we make the Endpoints a child of the Service just because they live and die together; when you delete the Service, the Endpoints object gets deleted as well.
A: If your thing, the child resource, is in a different namespace, there are some security considerations we would have to think through to allow that presentation to happen, because if you're an end user and somehow you can annotate your resources such that you can suddenly see stuff in another namespace that you're not supposed to, that would not be allowed. So we'd have to honor our project RBAC somehow to allow this to happen.
A: So, in other words, yes, we would be open to a feature that allows customization of the resource tree and relationships, so long as it honors project RBAC.
B: Yeah, so here you derive the relation from the owner reference, right?
A: There are, like, maybe two or three exceptions to this.
A: But it won't help you in the immediate future, because I think we would have to scope it out; this would kind of be a mid-term feature, and we have to spec out the design, like how we think we would allow users to control that, whether this would be something a system administrator would configure, or an end user (probably not an end user). But these are all things that we have to consider.
A: All right, any questions people have for Hongcai? Is this something people actually have a need for, this multi-cloud distribution of... actually, it's more multi-cluster than multi-cloud... multi-cluster distribution using a single deploy? I don't know if others have it.
A: Yeah, that question was more for the audience, to see if others would be interested. Where can people find out more about this, through either, like, a Slack channel, or if they have questions, how do they kind of get involved?
A: That question is for you. So anyone who's watching this video who is interested in this: how can they get more information or get involved?
A: All right, I think that's what I was wondering.
A: All right, thanks Hongcai for that. This is a really interesting project; I think it's really creative how you were able to accomplish that with just the native Kubernetes types.
A: All right, so the next agenda item: it's not a demo, but it's an overview of the upcoming Rollouts 1.1 features.
A: So we have quite a bit of new features, actually, for the 1.1 release, and I wanted to just quickly go over each one and explain what they're doing, because it might be easier to learn about them through talking than through, say, a changelog. All right, so the first one is notification support. We actually already demoed this, maybe two or three months back, when it first got merged into the Rollouts code base.
A: But if you're familiar with Argo CD Notifications, this should look really familiar.
A: We have native support, so you don't have to run a separate controller to get this: the Argo Rollouts 1.1 controller understands these annotations that you can use to send notifications, and it supports all of the notification providers that the engine supports, Slack and email and so on. So you get all of those things out of the box, and you have the same exact configuration experience that you do with Argo CD Notifications, just with the Rollouts controller inside the Rollouts namespace.
A: So this will let you get notifications on any Kubernetes event that we emit, and the syntax is based on the event names, so it's like capital R, RolloutStepCompleted. If you take the event name, add dashes between the words, and put the word "on" in front of it, then you can notify on that particular event.
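For example, subscribing a Slack channel to that event would be an annotation on the Rollout along these lines (the Rollout and channel names are illustrative):

```yaml
# Fragment of a Rollout manifest
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-rollout
  annotations:
    # "on-" plus the dash-separated event name selects the trigger;
    # the suffix names the provider, the value names the destination
    notifications.argoproj.io/subscribe.on-rollout-step-completed.slack: my-channel
```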
A: So that's the convention. That's the notification feature; I'll move on, since we already talked about that.
A: The second feature is the ability to control scale-down of either your canary or your blue-green preview when the rollout is aborted. We actually had this behavior already, but it was very inconsistent: in blue-green, we just left the preview up indefinitely, and there was not really any way to say, okay, when I abort, scale down the blue-green preview.
A: So now we've made this consistent across the board, except basic canary (because with basic canary you have to scale down when you're not using traffic shaping), where, by default, 30 seconds after the abort it will scale down the preview or the canary, and you can make this configurable. So you can say, if I don't want the default of 30, you know, make it an hour or a day; or, if I want to leave it up indefinitely, you can specify a value of zero, which is different from omitting it completely. The value zero means don't scale it down at all, just leave it up forever.
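Assuming the 1.1 field name `abortScaleDownDelaySeconds`, configuring this for blue-green would look something like the following sketch (service names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-rollout
spec:
  strategy:
    blueGreen:
      activeService: my-active-svc
      previewService: my-preview-svc
      # Scale down the preview ReplicaSet 10 minutes after an abort.
      # Default is 30 seconds; 0 means leave it up indefinitely.
      abortScaleDownDelaySeconds: 600
```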
A: So then the feature here is that when the rollout exceeds its progress deadline seconds, you can choose to actually abort the rollout, and you don't even need to be using analysis to do that. Previously, the only way to get automated aborts was to have an analysis run that failed, but here we can now allow progress deadline seconds to abort the rollout.
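A sketch of how this is configured, assuming the 1.1 field is spelled `progressDeadlineAbort`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-rollout
spec:
  # Abort the update (rather than only marking it degraded)
  # if it makes no progress within 10 minutes
  progressDeadlineSeconds: 600
  progressDeadlineAbort: true
```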
A
In
fact,
this
is
actually,
as
I
noticed
in
the
kubernetes
upstream
issues,
there's
been
like
an
open
issue
for
a
feature
like
this
for
like
those
for
years,
but
they
this
is
now
something
that
surveillance
roll
outs.
A: Next, analysis run GC. Previously, garbage collection of old analysis runs was tied to the revision history limit. In other words, in 1.0, if you had a revision history limit of 10, which is the default, we would leave around all the analysis runs and experiments and all those things associated with those ReplicaSets, and that just created this massive page of objects to look at when you're, you know, looking at your app in Argo CD or in kubectl. So these are some knobs that we are now giving to let people control how much they really want to see. You know, if I don't want to see any successful runs, I can just delete them right after they complete, and if I want, I can keep only the runs that failed, or vice versa. These are some knobs that make things a lot cleaner.
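If I have the knob names right, these limits sit under the rollout's analysis section; for example, keeping only failed runs:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-rollout
spec:
  analysis:
    # Delete successful analysis runs as soon as they complete...
    successfulRunHistoryLimit: 0
    # ...but keep the five most recent failed/errored runs
    unsuccessfulRunHistoryLimit: 5
```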
A: Oh yeah, this is AWS target group verification. If you are using EKS along with the AWS CNI, and you're using the AWS Load Balancer Controller, you may be interested in this feature. There's a problem that the AWS Load Balancer Controller has, based on the way it was implemented, in that if you change the service selectors of something, pod readiness gates don't properly get injected into the pods; in fact, it's impossible.
A: So this actually causes a problem, because blue-green works by changing the active service selectors to point from the blue to the green on every update, and what that ultimately means is that we're changing the service from underneath the ALB, and the pods are not getting target readiness gates injected into them; and readiness gates help with the zero downtime.
A: So, in order to allow us to have this model of switching service selectors from underneath the ALB, we implemented this verification feature, specifically for AWS, where, whenever we change the selectors of services as part of an update, we stop there and then go verify in AWS, by making API calls, that all of the service endpoints are properly registered in the corresponding target group.
A: It's a complicated subject, so I wrote a lot of documentation, as well as some slides that you can look at to visualize how this works; but it's important if you're using blue-green on AWS, using the AWS CNI and IP targeting. All right, next: CloudWatch. This is still in review, but it's very likely going to get merged in the next week or so. We are now supporting CloudWatch as a metric provider, so if you have an AnalysisTemplate, you can see how this stands out here; it uses the same query format.
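The queries mirror the shape of the CloudWatch GetMetricData API. A sketch of an AnalysisTemplate using the provider (the metric names, namespace, and success condition are all illustrative, not taken from the PR):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: cloudwatch-error-rate
spec:
  metrics:
    - name: error-rate
      interval: 1m
      # Illustrative threshold on the returned time series
      successCondition: "len(result[0].Values) == 0 or result[0].Values[0] <= 0.01"
      provider:
        cloudWatch:
          metricDataQueries:
            # Same structure as CloudWatch GetMetricData queries
            - id: errorRate
              expression: "errors / requests"
            - id: errors
              metricStat:
                metric:
                  namespace: MyApp
                  metricName: Errors
                period: 300
                stat: Sum
              returnData: false
            - id: requests
              metricStat:
                metric:
                  namespace: MyApp
                  metricName: Requests
                period: 300
                stat: Sum
              returnData: false
```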
A: Traffic was always split between the canary and the stable, but there's a use case where you actually want to launch an experiment with N templates and give them equal weights, because your statistical analysis can't be performed unless they are apples to apples. In other words, people weren't able to compare, like, a five percent canary against a 95 percent stable, because the metrics would just be all out of whack, right?
A: So that's why you actually want to use experimentation, and now with 1.1 you can leverage traffic splitting so that you can specify the weights for those experiment templates. So if you see here, when we get to this step, we'll send five percent to this canary, five percent to this baseline, and then ninety percent to the stable. And if we had a setWeight step before this, let's say a canary weight of 15, then 15 would go to the canary, 5 would go to this thing (which we are also calling canary right now), then 5 to the baseline, and 75 would go to the stable.
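The weighted experiment step described above would be written something like this sketch (template names are illustrative; the `weight` field on experiment templates is the 1.1 addition):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      steps:
        - experiment:
            duration: 30m
            templates:
              - name: canary
                specRef: canary
                weight: 5   # 5% of traffic to the experiment canary
              - name: baseline
                specRef: stable
                weight: 5   # 5% to a fresh baseline of the stable spec
            # the remaining 90% continues to go to the stable
```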
A: Oh yeah, dynamic scaling. This was one of our most popular requests. If you're using canary with traffic shaping, you'll know that currently we leave the stable scaled up for the entire duration of the update, and then once the update is complete we scale down the stable. The reason we chose to do that was that we wanted aborts to be immediate: if you scale down the stable, that means when you abort, you have to scale the stable back up, and that can take a lot of time.
A
So
this
this
feature
is
basically
saying,
as
I
increase
weight
to
the
canary
you.
Can
you
allow
the
rollout
controller
to
scale
down
the
stable
to
be
the
inverse
of
the
canary
weight,
and
this
way
this
is
important
for
scenarios
like?
Maybe
you
have
a
bare
metal
set
up
where
you
can't,
you
know,
increase
the
note
size
of
your
thing,
because
it's
physical
hardware
you'll
want
this
feature,
because
your
your
replica
will
always
be
very
close
or
matching
this.
A
This
number
right
here,
let's
see
what
is
there
anywhere
okay,
and
I
think
this
is
the
last
one.
That's
going
on.
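In the draft PR this is exposed (if I recall the field name correctly) as a single flag on the canary strategy, which requires traffic routing; a sketch:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      # Scale the stable ReplicaSet down to the inverse of the
      # current canary weight instead of leaving it at 100%
      # for the whole duration of the update
      dynamicStableScale: true
      trafficRouting:
        istio:
          virtualService:
            name: my-vsvc
```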
D: Yeah, sorry, a quick question on that last one. That PR looks like a draft PR at present; is that for sure going to be in 1.1?
A: Yeah, it's still a draft. I didn't take it out of draft because I need to write a lot more tests for it, but actually I can probably take it out of draft so people can start looking at the functional stuff, because the only thing really left is to write the end-to-end tests and unit tests for it.
D: All right, great, thanks. Just by way of example, my team will definitely take advantage of this, because we have cloud-based deployments, but we'll soon have a hybrid model where we have on-premise deployments as well, and we were thinking we were going to have to disable Rollouts for the on-prem deployments. But with this we won't. Okay.
A: Great, thanks. If you wanted to get early access: as soon as this is merged, we always build "latest" from the tip of the main branch, and you'll be able to just re-tag that in your environment and try it out early, before 1.1 comes out. Oh, by the way, with 1.1, we're trying to get it out hopefully in the next two weeks; I think we're on track for that.
A: Yeah, and I think this is the last one: there have been a lot of improvements to Istio support. The first is that in 1.0 we only had support for Istio with HTTP routes, and with 1.1 we're adding support for TLS routes. So with TLS, if you specify either a port number or SNI hosts, we'll find that TLS route in the Istio VirtualService, and then we'll modify it such that it splits traffic between your TLS routes. The second improvement made with regards to Istio: it seems to be very common to want to manage multiple VirtualServices for the same application in lockstep, and the feature this allows is that you can now provide virtual services as a list and specify multiple VirtualServices, and then the rollout controller will go and modify, you know, two, three, four VirtualServices and make sure the canary weighting is equal across the board. And I think that's the end of the upcoming features. Oh, let me look; I see some questions.
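Both Istio improvements show up in the trafficRouting section. A sketch (VirtualService, route, and host names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      trafficRouting:
        istio:
          # New in 1.1: a list of VirtualServices, all kept at the
          # same canary weight by the rollout controller
          virtualServices:
            - name: public-vsvc
              routes: [primary]   # HTTP routes (supported since 1.0)
            - name: tls-vsvc
              tlsRoutes:          # New in 1.1: matched by port and/or SNI
                - port: 443
                  sniHosts: [example.com]
```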
A: I don't know... okay, it's just going to sleep. So I'll open it up to the audience for any questions about these upcoming features.
A: All right, if there are no questions, let me check if there is anything else added to the agenda.
A: All right, I don't see anything else in the agenda, but that doesn't mean you can't ask any questions here. We have a lot of the maintainers on, and if you, you know, want to raise any issues, or talk about features or requests, now is your time.
C: Oh, I don't have an ask. It's that I think I helped and worked with three new contributors recently, and they all had some toolchain problems, for Argo CD for example. So I just don't know whether our documentation is up to date, because it's happened three times. Even myself, I cannot install it; I think for the UI development, I cannot install it properly following the instructions. So I'm wondering whether whoever did this recently can help to maybe refresh the docs to reflect the current state.
E: So, Hong, is this for Rollouts or CD? CD? Okay, CD. Okay, yes.
A: Yeah, I think, well, it usually is, like, the most efficient, I guess you can say: the last person to onboard notices the mistakes, or the incorrect or missing information. Most of the time, if they encounter all of that stuff, they have to power through it somehow, and that may involve asking in the contributors' Slack channel and so on; and then, once they actually power through it, to benefit the next person following them, they should go and make a PR with the doc changes necessary. But I would say we should first help that person get through it, so that they can then take the action to update the documentation, because I don't think the people who've been regularly doing this day-to-day will actually be the best people to update the docs, because they don't hit the same barriers that a new contributor will encounter.
C: Yeah, could I ask for one volunteer who actually did this recently, and quite successfully? So I can connect them with one new contributor. Basically, he asked me a lot of questions, and a lot of things I don't even have an answer for; then I can ask this new contributor to actually help us refresh the docs.
A: All right, if there are no other agenda items, we'll give you 10 minutes of your day back and we can end the meeting.
D: Sorry, I have one more question here, about issue number 958. I can put it in the chat here.
D: Yep. So you had made a comment in here about creating a controller that would monitor the ConfigMaps and Secrets. Has there been any other thought towards this? Is this a possibility for a future release, or would you be looking for someone else to create the PR to do this?
A: So my first question is... I actually followed, I subscribed to, that issue, and I think it got merged already.
A: They actually improved their support recently for Rollouts, and I would say you can actually try this today, because I think this project now has better support for it. Since my last comment, I did observe that they updated it to support the new Rollout version. So my question is: would that satisfy your use case, and if not, why not?
D: I mean, it mostly satisfies it. I think the biggest remaining issue is just, also, as an Argo CD user, the config diffs are effectively not usable, right? So, I mean, it's a nice-to-have. Is it a deal-breaker? Probably not, but it would definitely be a nice-to-have as an Argo CD user.
A: The config diff is actually something we would like to solve. So, the problem you describe with the config diff: anyone who's using Kustomize and its configMapGenerator feature will know that it deploys new resources and prunes the old ones; that's kind of just the model Kustomize has with regards to config management. So regardless of whether you're using Deployments or Rollouts or StatefulSets, or anything that needs a ConfigMap, you have that problem in Argo CD, and we want to solve that problem.
A: So I think, you know, if we were to only do that in Rollouts, it would not help our Deployment users who are using Kustomize configMapGenerators. But the idea that I think is being investigated is: can Argo CD diff across resources, and also maybe have some convenience in the UI to understand these relationships, and, you know, provide a good user experience to understand, okay, these ConfigMaps are really, like, the same thing, and I can just click on the cross-resource button to understand what's really different. Yeah.
A: So, in other words, we want to solve that diffing problem, but we want to solve it in Argo CD, and I think there's a way to kind of have our cake and eat it too.
A: And let me know about this Reloader: if someone has experience with it, and they tried it out, and it doesn't really work the way they wanted, at that point I think we should investigate whether it's worth either kind of improving Reloader, or throwing our hands up and saying, okay, we need this native support in Rollouts. But my preference is to kind of improve this rather than baking in support, because I think really the difference in user experience would just be putting something in a pod annotation (I mean, an annotation on the Rollout) versus something in the spec, and to me that's not a big enough difference to go and implement and duplicate a feature when this project may already be doing what we want.
A: Okay, yeah, thanks for raising that issue. Anything else? It's open to everyone.
A: All right, so thanks everyone for joining this month's meeting, and a reminder: we'll have the Workflows and Events meeting in two weeks from today.