From YouTube: Argo CD and Rollouts Community Meeting 14th Jul 2021
A
Hi everyone, and welcome to the July 2021 Argo CD and Rollouts community meeting. I'm your host, Jesse Suen; I am a principal engineer at Intuit and one of the maintainers of the Argo project. As a reminder, these meetings are recorded and will be uploaded to YouTube after the meeting. Today we have an announcement and a demo of a proposal that Alex is working on. Currently the best name we have for it is "headless Argo CD", but that might change.
A
As for the announcements: there is an inaugural ArgoCon that we'll be hosting in San Francisco in December. Henrik, do you want to say a little bit about ArgoCon?
B
Sure, and I can even share this real quick — I think you can see my screen. So yeah, we're really excited that we're actually doing the first ArgoCon this year. It will be a single-day event in San Francisco on December 8th. We're hoping that we can do it in person, but with everything going on it's still subject to change; we'll have a virtual part of it either way.

Registration is open — it's pre-registration right now, for a nominal twenty dollars, so it's a very, very cheap event to attend if you happen to be in the area. There will be a full day of user sessions and there will be some workshops; we're still nailing down the program. The CFP is open for those of you who want to take the opportunity to talk about what you're doing with Argo, or something else interesting around Argo. We're still not clear on exactly where in San Francisco we'll host this.

It will depend on how many people actually show up, but we have the date settled and we are working on the program.

We are targeting this to be an in-person event, and we're going to have a lot of interesting sessions for sure, and there will be some fun activities at the end of the day as well. So if you happen to be in the area — or if you can't travel and have any questions — feel free to reach out. I hope to see you all there, and I'll post the link to the website here in the chat.
A
All right, thanks, Henrik. Any questions about ArgoCon?
C
Real quick: do you have a date for the deadline to submit a proposal?
B
We have not yet — we're still quite some ways out, so the CFP will probably be open for at least another couple of months or so, I would think.
A
Awesome. All right, thanks, Henrik. So next we have a proposal that Alex has been working on. Basically, it's a lighter-weight way to run Argo CD: you don't have to run an API server. This is a proposal he has been working on, but he actually has a working POC and a demo. So Alex, do you want to take it away?
D
Okay, awesome. Thank you. So yeah, as Jesse mentioned — all right, let me start by introducing myself. My name is Alex. I'm a software engineer, I work for Intuit, and I'm also a maintainer of Argo. I work on different Argo projects, but right now I'm pretty much 100% focused on Argo CD. And sorry — of course this is the best time for background noise to start.
D
So the name is "headless", and it's subject to change. I will explain shortly why it's headless, and maybe you will be able to suggest a better one in the discussion. I'm just going to go through the problem we're trying to solve, the proposal, and the demo. So let's jump into the problem first — I want to give a little bit of context.
D
First, just in case someone doesn't know: Argo CD provides a set of features that enable multi-tenancy. By multi-tenancy we mean that you can install one instance of Argo CD in a cluster and then give access to that instance to different teams. These teams can use that instance without even knowing about each other, they can have different sets of permissions, and they can have access to different clusters — and it will be safe, thanks to the multi-tenancy features of Argo CD. This picture tries to explain how that is achieved.

We have the back-end components of Argo CD: the repo server, which generates manifests, and the controller, which compares the cluster with the state defined in Git. And we have the API server — the "head" of Argo CD — which powers the CLI and UI, plus it provides the way to authenticate users and define access control. And basically, to start using this API server...
D
...you must expose it outside of the cluster, and once you do, you can take the URL and give it to all your teams. As soon as it's exposed, it must be protected, and Argo CD offers features like SSO integration, and it has its own RBAC — not Kubernetes RBAC, but Argo CD role-based access control — which you can use to define boundaries and specify what end users can and cannot do. And this is not a problem.
D
It's a great set of features, and a lot of existing Argo CD users really like them. But we also have users who like Argo CD and do not need the multi-tenancy features — they don't want to deal with SSO and RBAC. An example of such users is cluster admins, and I'm hoping that in the discussion at the end of this presentation we can identify more types of users who want to use Argo CD and don't need multi-tenancy. Just to make it more clear: imagine you are an admin who has full access to the cluster, and you just need a tool to manage resources in the cluster. Here are a couple of bullet points that illustrate the problem for such users.
D
If you're an admin, you already have full access; but as soon as you install Argo CD, if you are following the getting-started guide, you have to deal with Argo CD accounts and passwords. We usually suggest following instructions to extract the auto-generated default password for the built-in admin account, and then we strongly suggest disabling the admin account before you expose the Argo CD API server.

And the second point is that we don't really have a good way to resolve this. If you want to use the CLI and UI, you somehow need to access the API server, and the only way right now is to expose it outside of the cluster. Then you have to deal with protecting it, and the best way to protect it is to configure SSO, or use the built-in accounts.
D
So it's not perfect, and we want to improve the user experience. The proposal is to introduce a new mode of using Argo CD called headless, and this picture illustrates what we mean by that: we want to remove the head of Argo CD — the Argo CD API server — take it out of the cluster, and move it into the client. I will explain in detail what we mean by that, and you will see it in action. Here are a couple of links.
D
We have a proposal PR, and it explains in detail what we're trying to do. We have an implementation, which happens to be a very lightweight set of changes — we don't have to change much to achieve it — and that's why it's pretty much implemented and waiting for review.
D
So, first of all, we want to make sure that it's really easy to install just the back end of Argo CD. We don't want to redirect users to documentation and explain which components have to be installed one by one. Instead, we want to provide a bundle — basically one YAML file that you can kubectl apply into your cluster — and it will install the required set of components. The next point is kind of important.
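A minimal sketch of what that one-file install could look like — the manifest name here is an assumption, since at the time of this meeting the bundle only exists in the proposal PR:

```shell
# Hypothetical one-command install of only the Argo CD back end
# (repo server + application controller, no API server or Dex):
kubectl create namespace argocd
kubectl apply -n argocd -f core-install.yaml
```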
D
I didn't mention it in the slides, but in the multi-tenancy case we strongly recommend installing the HA version of Argo CD — HA basically means high availability — and that means, if you use those manifests, you would install several replicas of every component for reliability, because you don't want to affect a lot of users during upgrades.

In the headless case, we assume there is no multi-tenancy, so we can install a very lightweight version of each component. I'm pretty sure this version of Argo CD would use less than one CPU and less than one gigabyte of memory — it easily fits into minikube.
D
There
is
no
need
to
install
the
dex
component
that
translates
non
oedc
providers
to
odc
yeah
so
and
basically
installation
should
be
as
simple
as
possible
and
the
next
change
we
want
to
make
is
we
want
to
make
it
make
it
really
easy
for
end
user
to
keep
using
argo,
cdcli
and
ui
without
api
server,
and
the
proposal
is
to
create
basically
a
new
flag,
a
headless
that
you
can
supply
and
if
that
flag
will
basically
explain
to
cli
that
there
is
no
idea
server
and
it
has
to
talk
directly
to
the
kubernetes
cluster
and,
as
you
will
see
during
the
demo,
there
is
no
need
to
specify
this
flag
for
each
and
every
command.
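A sketch of the proposed flow — the flag and command names come from the proposal and are subject to change:

```shell
# Configure the CLI once; no server URL, account, or password needed,
# because it reuses your existing kubeconfig credentials:
argocd login --headless

# Subsequent commands work as usual, talking straight to Kubernetes:
argocd app list
```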
D
It will just start the API server on localhost, and you can access the UI on localhost. The last change we are proposing — you might not even know it, but we have a very useful CLI that helps administrators, called argocd-util, and it has a set of commands that you can use to manage Argo CD in a cluster. That CLI was created first for maintainers; it was hidden from end users until the last release.
D
All right, that's the set of changes, and I can just go ahead and demonstrate it live. The demonstration should not take a lot of time — basically, it's the getting-started guide for Argo CD headless, and it's supposed to be simple. That's why I decided to be brave and really start from a zero state.
D
The bundle that I mentioned exists only in my pull request, so I'm basically demonstrating it to you — it's not yet available in master, so you won't be able to repeat it unless you download my fork of Argo CD. A set of components was created; if you've installed Argo CD before, you will notice that it's shorter than usual. It creates, as I mentioned, the back-end components, plus it installs cluster-level permissions that give Argo CD access within the cluster — and the reason is...
D
...we think the users who want this feature the most are Argo CD administrators. And this is subject to change as well; we need to discuss what kinds of bundles we want to have — maybe we need a namespaced bundle too. So the components are there, and next I can configure the Argo CD CLI using the argocd login command with the headless flag, and, as you can notice, it takes no additional arguments.
D
And it's working even without any back-end components, because it simply talks to the Kubernetes API. I didn't have to run any commands to start the Argo CD API server locally — it's basically magically done by the CLI itself: every time any of these commands is executed, it starts it.
D
D
And
finally,
I
want
to
demonstrate
application
creation
to
do
that.
I
need
the
kind
component.
At
least
I
need
replay
server
so
that
it
can
generate
manifests.
So
I
just
run
this.
You
know
rollout
status
command
to
make
sure
it
was
started
successfully
and
next
I
can
use
you
know
just
a
normal
cla,
argosy
dcla
command
to
create
an
application,
so
it
was
created
successfully
and
let's
make
sure
it's
there
using
ccli.
D
First,
they
can
say
adversity,
app
guest
book
and
it
works,
and
the
final
bit
which
I
want
to
demonstrate
is
why
so
I
can
run
our
ocd
admin
dashboard
and
it
starts
ui
on
localhost
brings
me
a
url
here.
So
if
I
just
copy
paste
the
url
into
the
browser,
I
should
be
able
to
see
the
typical
I
mean
the
normal
ocd
user
interface
and
yeah,
so
it
just
works.
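The local-UI step from the demo can be sketched as follows — the command name is taken from the demo and may change with the proposal:

```shell
# Serve the Argo CD web interface from the CLI process itself; it
# listens on localhost and prints the URL to open in a browser:
argocd admin dashboard
```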
D
Everything is deployed, I believe, because I deployed the guestbook application before and forgot to delete it, so it just picked up previously deployed resources. That's it — as I promised, the demo is short, and it's supposed to be short, because the goal of this feature is to make it really easy to start using Argo CD in the case of no multi-tenancy. Thank you for listening. So please ask questions and give any comments about the feature. And if you don't have any questions right now...
D
...an example I can imagine is someone who doesn't really have cluster access and just wants to use Argo CD to manage a single namespace. I'm really curious: would you want to install Argo CD into your namespace just to manage resources in that namespace? So this is one possible use case.

Second: maybe you have a suggestion for a better name. Headless is the best we've got so far, but we're not really happy with it.
A
Thanks, Alex. Yeah, short demos are good — short means it's simple.
B
Another question as well: with these things you start small and then you grow. So one interesting thing to know would be: what do you think about moving this to a multi-tenant environment? Because you often start with something small like this, and then you grow and grow, and then suddenly realize, hey, I need the full thing after all. So if anyone has any thoughts on that, it would be good too.
A
I guess the first question is: who would use this? Does anyone here have a need for a lighter-weight Argo CD that doesn't have an API server, but is accessed directly through the Kubernetes API server? Is that something people would use?
C
Yeah, I want to comment that I'm intrigued by it. I'm not sure if I have a use case for it, because our current Argo CD implementation is for our own ecosystem that we host and manage, and we do have SSO — we have all the mechanics we need to leverage the full-fledged Argo CD.

I'm going to have to do a little soul-searching to consider whether this opens additional avenues for complex application delivery to customer-managed Kubernetes clusters, and whether there's some advantage to be had there. It's quite possible that this would be a very valuable mechanism for a very lightweight cluster configuration ecosystem.

It could potentially be right for that scenario. So I'm very intrigued by it — I think it's a nice piece of tech — I'm just not sure if I have a use case for it yet.
B
Is there anyone here using it that has, say, an IoT or telco use case? In my opinion, this is something that would lend itself very well to a highly distributed architecture with a very large number of clusters. I don't know if anyone here is doing anything along those lines.
A
Yeah, I was about to mention a similar use case. I think the place where you see this fitting in is if you have a lot of clusters that don't need a lot of people accessing them, so you have no need to set up tenancy for those.

Then just kubectl access to that cluster is enough, and you get the same experience you get with the whole multi-cluster Argo CD, but just for that cluster, using normal Kubernetes access. That's kind of the use case we're trying to address. One of the knocks on Argo CD has been "I don't need all those features" — but people still like the UI.

Okay, well, maybe—
D
I just got a question in the background from Henrik about the upgrade path. Yes, I think it's worth mentioning that it's really easy to upgrade from headless to the full version — literally just a kubectl apply of the HA-version manifests. It's one of the goals of the proposal to make the upgrade that simple.
A
Also, Alex, they're not mutually exclusive. If you already have Argo CD set up in the normal way, but you have access to the Kubernetes cluster it's running on, you could actually run the CLI in headless mode and access it using just normal Kubernetes credentials.
D
That's right — there are no back-end changes at all. Basically, what I demonstrated was installing the stable version of the back ends — the controller and repo server — and I just built the CLI locally, so you can use it to access any Argo CD right now. And I think it was kind of coincidental that it was really easy to implement: we as developers were already running headless Argo CD for development, so that code existed already.
A
All right — so, like Alex said, the proposal is in a pull request in Argo CD that is being reviewed. If you have any other thoughts after the meeting, feel free to chime in there with your use cases or anything else. Thanks. Okay, so that was the end of the agenda items we had for today, and after the agenda we like to open it up for any other open issues, questions, or discussion topics people have.
E
Hi, [name unclear] from Magnet — we're an ad tech company. We use all the Argo projects — Argo CD, Rollouts, Events, and Workflows — and we've encountered some differences between our use case and what seems like the general community's. I just wanted to raise some of the issues, the rather large problems we've encountered, and things that have made us consider switching to different projects.
E
So we would have to switch to something other than Argo Rollouts for that. It does look like that was actually picked up for version 1.1 of Argo Rollouts, but I just wanted to check.
A
A canary with traffic routing, or—?
A
Okay, and which traffic-routing provider are you using?
A
Okay, yeah, so there is a feature that's trying to address your problem. As you may or may not know, we currently have a feature called setCanaryScale, which basically lets you set the canary scale differently than the weight. The use case it was trying to address is: I want to start the rollout and scale up the canary, but not actually send traffic to it, so that I can do things like test it before it receives production traffic.

The complementary feature that needs to be implemented is called "set stable scale", and the idea is that we would have a way to set the stable ReplicaSet size to be the inverse of the weight.

In other words, if you have a rollout with 100 replicas, the ReplicaSet sizes would be proportional to the weight, and so you don't have to double the replica count to 200 in the middle of an ALB canary update. That is a feature we want to implement, and it's gotten a lot of popularity on the issue, so we know there's demand for it.
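The existing setCanaryScale step described above looks roughly like this in a Rollout spec — field names follow the Argo Rollouts canary strategy, though exact behavior may vary by version:

```yaml
# Scale the canary up without sending it traffic, pause to test it,
# then start shifting weight:
spec:
  strategy:
    canary:
      steps:
      - setCanaryScale:
          replicas: 3      # canary pods exist but receive no traffic yet
      - pause: {}          # smoke-test the canary here
      - setWeight: 25      # now begin routing real traffic
```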
A
Is
the
one
that
is
the
that
would
let
you
not
have
to
double
up
on
your
replica
accounts
during
an
update,
yeah
the
reason
it
was
implemented
this
way
by
the
way
is
that
we
we
wanted
a
board
to
be
instantaneous
and
in
order
for
it
to
be
instantaneous,
you
have
to
have
the
stable
stack.
A
You
know
at
the
ready
so
to
speak
so
that
the
only
thing
that
needs
to
be
done
during
an
import
or
rollback
is
that
you
just
change
the
weight
back
to
100
stable.
So
there
just
know
that
if
you
do
use
a
feature
when,
when
there
is
an
abort
you're
subject
to
just
pod
scheduling,
delays
like
if
I
don't
know
if
you
have
to
auto
scale
or
whatever,
but
that
would
be
one
consideration,
but
that's
probably
something
you're.
You
want
you're,
okay,
living
with.
A
Yeah, so that's something you should follow. Let me see if I can find the issue number.
A
Okay, yeah, I found it. It actually looks like we targeted it for 1.1, which is the next release; hopefully that will be in an August timeframe, but definitely before KubeCon. Our track record is about quarterly releases for Argo Rollouts.
A
Yeah, and I linked the issue in the Zoom chat.

E
Much appreciated.
E
I do actually have more, but I want to leave some time for other people and the things they want to raise.
E
But what kind of sucks about that is the diff is a whole new file. It's practically useless, other than saying that once the rollout is complete, we're going to delete the existing ConfigMap and create a new one.
E
It
would
be
really
nice
to
be
able
to
see
the
exact
lines
that
are
changing
in
that
config
map,
so
something
like
scheduker
reloader
might
help
with
that,
but
we
would
lose
the
ability
to
scale
the
existing
canary,
I
think,
or
the
existing
replicas,
that
while
the
canary
is
going
on,
maybe
I'm
wrong
about
that.
A
Oh,
I
see
what
you're
saying
so
your
choice,
your
your
complaint,
is
that
if
our
choices
are,
we
either
use
brand
new
config
maps
or
we
use
the
same
config
map
which
is
updated
in
place.
The
problem
with
the
latter
is
that
any
new
pods
that
get
created
that
reference
that
configmap
are
using
the
new
values,
which
is
not
what
you
want
yeah
in
case.
We
are
in
the
middle
of
an
update
and
you
need
to
scale
both
the
old
and
the
new
that
they
capture.
A
With
the
the
brand
new
config
maps
is
that
I
can't
tell
what's
different,
because
they're
they're
completely
different
config
maps
and
we
don't
show
diffs,
okay,.
D
Maybe — I did want to mention a related issue we didn't get to. We have the same kind of users who chose to create new ConfigMaps; they also use Reloader, and they basically stepped on an even more severe problem: you cannot execute prune. The problem was related to garbage collection of old ConfigMaps, and I wanted to mention another feature called prune last.
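For reference, the prune-last behavior mentioned here is exposed in Argo CD as a sync-option annotation on the resource — a minimal sketch, assuming the standard annotation form:

```yaml
# Defer pruning of the old ConfigMap until the rest of the sync has
# completed, so pods still referencing it aren't broken mid-rollout:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    argocd.argoproj.io/sync-options: PruneLast=true
```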
E
Thank you for that — that does actually help a lot, but the diff is still an issue.
D
Feel
like
maybe
I
would
not
commit
to
implement
it
right
away,
but
I
feel
like
it's.
It's
maybe
just
a
ui
feature,
argo
cd
kind
of
have
the
knowledge
that
it's
an
old
version
of
config
map
and
it
should
be
able
to
it
shouldn't
be
too
difficult
to
compare
old,
configmap
and
new
config
map
and
just
have
it
in
ui
yeah.
Do
you
think
it
would.
A
I
was,
I
was
about
to
mention
this
same
thought.
You
know
just
like
how
github
in
github
you
can
diff
across
file
or
diff
across
revisions
or
branches,
and
I
do
think
a
ui
feature
in
argo
cd,
which
I
I
don't
necessarily
think
we
have
to
implement
smart
detection
like
oh
and
wreck.
You
know
understand
like
these
are
config
maps
coming
from
the.
B
A
Same
source,
but
at
least
just
provide
a
way
like
I'd
like
to
dip
this
against
that
and
like
explicitly,
and
that
would
be
a
very
simple
feature
to
implement
in
the
ui
to
show
this,
but
I
I'm
totally
open
to
a
ui
feature.
That
is,
that
would
allow
people
to
do
that.
A
I
think
at
one
point,
in
fact,
we
were
talking
about
diffing
across
applications,
because
people
had,
like
you
know
a
stage
environment
or
in
the
prod,
and
they
wanted
to
kind
of
see
before
I
I
want
to
see-
or
I
just
want
to
see
the
difference
between
two
entire
environments
and
that
was
kind
of
an
ambitious
diffing
feature
we
were
thinking
of,
but
differing
across
two
files
in
the
same
application
would
be
pretty,
I
think,
easy
to
implement.
Since
we
have
all
the
information
there
already.
E
Yeah, that could work, as long as it would also get included in the Argo CD CLI.
A
So would you like to file an Argo CD enhancement proposal for this, so that we can track it? I don't think there is one for diffing across files in the same application, but I actually agree there is a strong need: people create new ConfigMaps on every update — basically using the configMapGenerator in Kustomize — so you can't always just tell people to use the same...
A
Just
update
the
existing
config
map
in
place,
because
that
just
doesn't
work
for
a
lot
of
you
cases
and
so
for
people
who
do
generate
new
config
maps
every
release.
A
So
so
yeah
follow
the
issue
and
then
oh
we'll
see
what
we
can
do
sure
we'll
do.
A
All right, if there's nothing else, I think we'll end a little bit early. Oh, there's a question from Karthik in the chat: is it possible to integrate GKE clusters with Argo CD installed on EKS? The answer is yes: as long as Argo CD can access the other Kubernetes API server, it should be able to manage it.

The caveat is — say you were doing it the opposite way, where your Argo CD was hosted on GKE and the managed cluster was in AWS. In that case, you wouldn't be able to leverage IAM authentication to the AWS cluster, because Argo CD isn't in the AWS network and ecosystem. So that would be one caveat: if you want to leverage IAM auth to manage clusters, then your control-plane Argo CD should exist in AWS.
A
I'm
not
sure.
If
gke
has
something
similar
to
I
that,
like
that
works
the
same
way
as
amazon.
Does
anyone
know?
Yes?
Yes,.
D
It
has,
but
unfortunately
I
just
I
know
about
it,
because
we've
got
a
pull
request
that
tries
to
bundle.
A
Yeah, so for now you would be managing those clusters through bearer tokens — that would be the caveat there.
A
All right, thanks for the questions — we actually usually don't get many, and I like to hear people's thoughts.

All right, I think that's the end of today's meeting. I think the workflows meeting might be next week, if I'm not mistaken — we delayed this one by a week because Intuit was out. So I think next week will be the workflows meeting.

It is — all right, thanks, everyone, for joining, and we'll see you again either in a week or in a few weeks for the August CD meeting. Thanks, everyone.