From YouTube: Argo Contributor Experience Office Hour 11th Mar 2021
A
Let me start sharing my screen, because finally today we have a lot of agenda items: we wanted to talk about the ApplicationSet controller, we have a proposal for config management plugin enhancements, and we wanted to talk about the Argo CD 2.0 milestone. I assume that ApplicationSet was added by Shobik. Shobik is not in this meeting, but yeah, I added it.
B
Cool. So for those that aren't familiar, the ApplicationSet controller is a new sub-project of Argo CD. The ApplicationSet controller itself is installed alongside Argo CD.

B
It adds a new CRD, and the CRD plus the controller generate Argo CD Applications using template data, with that template data coming from YAML, from cluster secrets, or from Git. That basically makes it easier to target a single cluster with multiple applications, to target multiple clusters, or to allow self-service-style access to Argo CD Applications. We're working towards our first release, which is coming up very soon, targeting release alongside Argo CD 2.0.
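For readers following along, here is a minimal sketch of what an ApplicationSet with a list generator might look like. The application name, repo URL, and cluster entries are placeholders for illustration, not from the meeting:

```yaml
# Hypothetical ApplicationSet: a list generator producing one Argo CD
# Application per cluster entry; names and URLs below are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
    - list:
        elements:
          - cluster: engineering-dev
            url: https://kubernetes.default.svc
  template:
    metadata:
      name: '{{cluster}}-guestbook'   # template parameters come from the generator
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{url}}'
        namespace: guestbook
```

Swapping the list generator for a cluster or Git generator changes where the template data comes from, as described above.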
B
One item is release candidate testing, and the other is gathering a list of contributors that have worked on the 0.1.0 release over the last year. But first up, just a quick status update on what we've been working on over the last month.
B
New features related to adding support for arbitrary key/value pairs to the list generator, and a bunch of bug fixes, because we've been busy getting ready for our 0.1.0 release. We've mainly been focused on bug fixing, adding new tests, and some release-specific miscellaneous tasks, but a bunch of stuff has gone in over the past month: fixes to the Git file generator to prevent parameter-generation issues, support for different kinds of application types in the Git directory generator, support for targeting the local cluster with the cluster generator, generally preventing the ApplicationSet controller from creating invalid Applications (which, when it does, Argo CD really doesn't like), and fixes to leader election.

B
We've switched the base image of the ApplicationSet controller to Ubuntu 20.10, for the exact same reason Argo CD did, which was related to vulnerability-scan issues. A bunch of good stuff. Probably the biggest thing we did was a bunch of new documentation that I wrote, and all of that is on our fancy new Read the Docs page.
B
If you're interested, you can go to the ApplicationSet repo, look for the Read the Docs link, and see all the new topics that we wrote: installation instructions, getting started, detailed descriptions of the new concepts in ApplicationSet and how they're used, and a bunch of new examples. Lots of good stuff in there.

B
So with all that good stuff in, that makes us ready to do release candidate testing. I've put up a new ApplicationSet 0.1.0 release candidate (RC) build for testing. It's up to date with the latest commits from the ApplicationSet repo and targets the Argo CD v2 release. We'd love to get some eyes on the RC and some testing, especially from folks who have worked with ApplicationSet in the past and have an internal understanding of how it should work.
B
We really would like to make sure that we haven't regressed anything and that it continues to work the way you all expect. But of course, if you're new to the ApplicationSet controller and would like to see how it works, give it a shot and give us your feedback; that would be great as well.

B
As for information about how to test it, and where you can find the release candidate image: probably the easiest way to get that is the Argo CD appset Slack channel, which I believe is linked on the schedule there.

B
So yeah, definitely check that out and give us feedback sooner rather than later, because, like I said, we're looking to get that out alongside the Argo v2 release whenever that comes out, and I think Alex will be talking about that shortly. And then the final thing: I'm gathering a list of folks that have contributed to the ApplicationSet controller over the last year or nine months or so that folks have been working on it, and I want to make sure that everybody that's worked on it is included.
B
So if you are already on that list, great. What I need you to do is make sure that the name and the GitHub ID that I have listed for you are correct and are as you would like them to be in terms of how you are referred to. If they are, great, you're good to go. Plus, if there is anyone you know of that worked on the ApplicationSet controller, whether it be code, design, docs, planning, review, issues, or inception, let me know. One thing I would point out: I'm still trying to figure out how to get the list of contributors to the Google doc.

B
It seems like, at least with the privilege level I have on the Google doc, it isn't obvious who actually contributed, so the list of names there is pulled primarily from the GitHub contributions.

B
In any case, if you know of someone that is not on the list but absolutely should be, please reach out, whether on Slack, in the GitHub issue itself, or in the appset Slack channel, and just let us know, because I want to make sure that everyone that contributed gets a shout-out.
C
Yeah, I really think Alex did contribute a lot to the original Google doc. So, okay.
B
Perfect, yeah. Once I get access to that provenance data, I should be able to update the list, but that's how it is now: based solely on GitHub commits, GitHub issues, and so on. So if you are on the list, great, make sure that I have the correct details for you. If you are not on the list and should be, reach out so I can put you on the list. And that's it for me.
A
I had one quick question about testing: which version of Argo CD should be used for testing? Does it have to be 2.0 or the current master, or can it be 1.9?
D
Good question: is this the one that would be shipped with 2.0?

D
Right. In that case, I know master could be unstable, but then our primary focus should be testing with 2.0. I don't know, what do you think?
A
I was going to propose the same. So last time we talked about the 2.0 release, I was hoping that by today we would have a spreadsheet that helps us do the testing. It didn't happen, because we had to fix bugs. So how about we try again, and maybe we can have a spreadsheet that includes the 2.0 Argo CD features plus appsets, and then we cut the 2.0 release candidate, and we have an appset release candidate already.
B
Yeah, sounds good. It would be best to test with, like you all said, 2.0, because that's the one that we're shipping alongside, but the ApplicationSet controller is self-contained enough that it does also work with 1.8.
A
All right, so it feels like it depends on the timing. If we really have a release candidate soon, which we should (maybe we can talk about it at the end of this meeting), then it makes sense to wait a day or two for the Argo CD release candidate and then test both of them at the same time, right?
E
Okay, so today I'm going to talk about the proposal for Config Management Plugins 2.0. I'll start with a brief description of config management plugins, how they work today, and their shortcomings: the motivation behind implementing 2.0. Currently, Argo CD provides first-class support for native plugins such as Helm, Kustomize, Jsonnet, and Ksonnet. That support includes bundled binaries, and users can override the parameters in the UI and CLI.
E
Applications can be auto-discovered and auto-suggested during creation in the UI, and there are performance optimizations. Argo CD has received multiple requests in the past to provide similar support for additional tools such as CDK8s, Tanka, jk, etc. The shortcomings with the current approach are, first, that the installation is a bit complex: if users want to use an additional tool, they have to add an entry for it in the argocd-cm config map.

E
Then they have to add the binary to the argocd-repo-server pod, but sometimes adding just a binary is not sufficient. In that case they have to build a customized repo server image with the additional tools and dependencies bundled into it.
E
This approach is error-prone, manual, and requires learning. Then there's discovery: Argo CD currently auto-detects the native plugins and selects the appropriate tool given the path in the Git repository.

E
That selection is based on well-known files such as Chart.yaml and kustomization.yaml. But if someone wants to use an additional plugin, they have to explicitly specify the plugin name that will be used to render the manifests.

E
So we want to provide similar first-class support for additional plugins as well. And then there's parameter support: currently, a config management plugin allows specifying only a list of environment variables, and you cannot override its parameters in the UI and CLI the way you can for the native plugins. We want to improve that functionality as well.
E
That image would run as a sidecar in the repo server deployment, and it would have shared access to the Git repositories. The image will also carry a YAML specification file that tells the repo server how to render the manifests, and its entrypoint would be a lightweight new config management plugin API server that receives requests from the main repo server to render the manifests based on that specification file. So what benefits does this approach bring?

E
The plugin owner has complete control over the execution environment: they can package whatever dependencies they want. And the operator no longer has to build a customized repo server binary; whatever dependency they need, they can add to their sidecar, so that improves things as well. And the plugin image runs in a separate container from the main repo server.
E
Overall, the experience would look like this. For installation, to install a plugin the user patches the repo server to run the config management plugin as a sidecar. Say someone wants to use CDK8s to render manifests: they add it as a sidecar to the argocd-repo-server, the entrypoint would be the new lightweight config management plugin server, and then they mount a volume.
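As a rough sketch of the kind of patch being described (the image name, volume names, and mount paths here are illustrative assumptions, not the proposal's final layout):

```yaml
# Hypothetical patch adding a CDK8s plugin sidecar to argocd-repo-server.
# Image name, volume names, and paths are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: cdk8s-plugin
          image: example/cdk8s-plugin:latest      # off-the-shelf tooling image
          command: [/var/run/argocd/argocd-cmp-server]  # CMP server as entrypoint
          volumeMounts:
            - name: plugins
              mountPath: /home/argocd/cmp-server/plugins  # shared socket directory
            - name: plugin-config
              mountPath: /home/argocd/cmp-server/config   # holds plugin.yaml
      volumes:
        - name: plugins
          emptyDir: {}        # shared between repo server and the sidecar
        - name: plugin-config
          configMap:
            name: cdk8s-plugin-config
```

The shared `plugins` volume is what the next part of the discussion covers: the CMP binary is copied in by an init container, and the socket files live there.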
E
That would be a shared volume between the repo server and the sidecar. Argo CD will also change the repo server deployment spec; this part would be added to the repo server by default, whether or not someone is going to use an additional plugin. It has an init container. Why do we have it?

E
We have it because we want to copy the lightweight new API server into the sidecar, so we mount it into this volume. And then there's the volume that will be used for the socket files (I'll talk about those later): it would either hold the socket files that the repo server uses to communicate with each plugin, or the shared Git repositories.
E
Okay, so that was the installation experience. For configuration, the plugins will be configured via the config management plugin specification file. It would look something like this: it looks a lot like a Kubernetes spec, but it's not a CRD.

E
It just follows the conventions of a Kubernetes spec file. This file will be placed at a well-known location inside the sidecar; it can be something like /plugin.yaml. Argo CD doesn't care how the file gets there: it can either be baked into the plugin image via docker build, or it can be placed there by volume-mapping the file from a config map.
E
Overall, this file has the name of the plugin, the version of the plugin, the init command, the generate command (which tells how to generate the manifests), and the discover command, which I can talk about later. So what is the config management plugin API server? It will be a new Argo CD component whose sole responsibility is to run the generate command inside the plugin environment, generating the manifests at the request of the repo server.
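A sketch of the specification file being described, with the fields named above (the concrete commands and the discover rule are invented for illustration; this follows the proposal's Kubernetes-style conventions but is not a CRD):

```yaml
# Hypothetical /plugin.yaml for a CDK8s plugin. Kubernetes-style spec
# file baked into (or volume-mapped into) the plugin sidecar.
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: cdk8s
spec:
  version: v1.0
  init:
    command: [sh, -c, "npm install"]                     # prepare the environment
  generate:
    command: [sh, -c, "cdk8s synth && cat dist/*.yaml"]  # emit manifests on stdout
  discover:
    fileName: "main.ts"   # claim application paths containing this file
```

The generate command's stdout is what the CMP server hands back to the repo server; the discover rule is what feeds the IsSupported question discussed next.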
E
This server would expose the following API to the repo server: GenerateManifests, which returns the YAML produced using the plugin tooling, and IsSupported, which returns whether or not the given path is supported by the plugin. And at startup, the server will look for the specification file, and that config management plugin specification file tells the plugin how to render the manifests.
C
Oh, I added that. Actually, that's a good question, because the repo server has to decide, or figure out, who can handle a given path. Today it looks at a path and it sees a Chart.yaml, a kustomization.yaml,

C
or .jsonnet files, and it says "okay, this must be a Helm chart" and just picks that. With plugins, in order to auto-detect which tool can support a path, one of the ways we can consider doing that is that the CMP server itself advertises the fact that "yes, I can handle that", and the repo server polls everyone to understand which plugins can handle this path. That's one way to do it.
C
I think the other option was that the repo server is taught how to figure that out for itself rather than asking the plugins, by the plugins advertising to the repo server: "okay, if you see this file, then that plugin can handle it". So I don't think that's set in stone. I think we're gonna...
A
I think it's the application path. Inside of the repo you can have, I don't know, /pro/app.one, and it could be a Helm-based application, Kustomize-based, or plugin-based, and that method is supposed to answer: "are you based on that plugin, yes or no". And if the plugin says "yes, it's based on me", that means you're supposed to use the GenerateManifests method of that particular plugin.
A
I feel like, with IsSupported, basically the plugin itself would not implement it. This API server is still going to be part of Argo CD: we're literally going to add a second command to the repo server binary, and that command is going to start a web server that has two methods, GenerateManifests and IsSupported, and it will use the configuration bundled into the plugin image.
C
Okay. Ideally, the only thing a plugin author has to do is build an image with the necessary tooling, because once that's available, the entrypoint to that sidecar will be our own CMP server that we copied in at runtime, and that server is the thing that implements those methods.
A
I just wanted to highlight that, because it was hard for me to understand, so I want to repeat it. Currently, we already have a way, through configuration, to get a GenerateManifests method. It doesn't work because, imagine you need to run JavaScript: you cannot just copy something into the repo server from an init container and get a JavaScript engine that way. Basically, you need to run something like apt-get install blah blah blah, and that was the problem this API server was supposed to solve.
D
Plugin
got
it
yeah,
I
I
think
that
makes
sense.
So
I
think
so.
I
think,
if
supported
path
is
something
that's
going
to
be
invoked
by
the
repo
server.
It's
basically
the
cmp
server
where
this
function
is
available
and
that
cmp
server
is
going
to
pull
different
plug-in
site
card
containers
that
have
been
configured
to
check
where
this
is
going
to
work.
I'm
just
kind
of
wondering
that
who
answers
this
question
that
is
supported
like
from
from
where
does
information
come.
Is
that
something
that
the
plugins
themselves
advertise
that
hey?
C
I think that assumption is wrong: there are many CMP servers, one per plugin; it's one-to-one. Basically, let's pretend for a second that we did all of our native support as plugins, which theoretically should be possible (ideally we should make it possible to do that). What would happen is that we would have a Helm plugin, a Kustomize plugin, and a Ksonnet plugin, and they would be three sidecars to the repo server.

C
The entrypoint for all three would be the CMP server, and they're all running as sidecars, and they all answer the IsSupported question for a given path; they all answer the GenerateManifests question for a certain path.
C
Yeah, I think IsSupported is the confusing part, because the alternative way we could have done it is that the repo server is told: "if you see this file signature, it is associated with this plugin; if you see this other file, it's associated with this other plugin". That's the alternative we were also considering.
D
Right, I mean, the way I see it is that that well-known path or file could be something that is part of the manifest. Which means, if you have a manifest which says "I accept kustomization.yaml files", then IsSupported, or whatever it is, should say yes for that, and that specific plugin is allowed to handle it.

D
And since it's the operator who's going to configure that, we shouldn't have a situation where people are putting random things in there (which we should protect against anyway, but that's a different conversation). But then it's the plugin itself that should say: "hey, I am somebody who knows kustomization.yaml or Chart.yaml". So isn't it the plugin who's doing that, saying "yes, I accept this"? Maybe I...
A
I think... Sharma, can you please scroll back up to the configuration? Yeah, this is it: "find" and "check", the two kinds, right? Okay, and please explain that.
C
Yeah, and then for who does it: we can choose either the plugin that looks at this file and says "okay, the glob of main.ts means I can handle this directory", or this information is communicated to the repo server, and it's the repo server who looks and says "okay, I see main.ts".
D
Right, the information could either be on the repo server side or on the sidecar side, but yeah, I think that's fine. So my other question, to confirm my understanding: let's say Kustomize today wasn't part of the whole thing as a first-class citizen, and I was bringing it in as a plugin. I'm the author of Kustomize and I publish an image for Kustomize, which has an entrypoint to use Kustomize, etc.

D
Do I need any changes to that image to get this working, right, for the manifest?
C
So, in theory, people should be able to take off-the-shelf images that are produced by projects that know nothing about Argo CD, make them sidecars to the repo server, volume-map a plugin.yaml into that container, and make our CMP server the entrypoint. And then it would just work.

C
Really, the only thing we need from something to be a plugin is the environment: the image that contains all the necessary tooling to run Kustomize or CDK8s or whatever.
E
So that we can discuss everything together, yeah, okay. So, basically, how would the repo server know which plugins are available? For that we will have a registration process: the repo server needs to understand which plugins are available, so all these sidecars will register themselves as available plugins with the repo server by populating a named socket file in the shared volume between the repo server and the CMP server. So let's say this is the shared volume.

E
All the sidecars, all the plugins that are available, would register themselves here. The named socket file indicates the plugin name, and to discover all the plugins, the repo server simply lists this directory to see which sidecars are running. To communicate with a plugin, the repo server simply connects to the socket file and makes a gRPC call to the CMP server running there.
E
Okay, that's the registration part. As for discovery, the auto-selection of the tool, I think that part will be running on the main repo server; the logic will be running on the main repo server.

E
From all the listed plugins, the repo server will run the discovery (the find or check), and whichever plugin responds first will be selected as the plugin to use to render the manifests. That's most of it, but there are small things. Say we want to support two versions of a plugin, like two versions of CDK8s: then there would be two different images, one per version, which means two different sidecars, one per version of CDK8s. And parameter support we can discuss later, in v2 of this project.
D
Thank you, I think this was a very good big summary plus the tiny discussion we had. One improvement I would suggest, which might help us with registration and discovery as well: given that we said we should be able to use anything off the shelf and add additional metadata to it, which is the manifest that we have, what if we could model this around a CRD that is a plugin, where we take an image and everything that you have in the spec manifest is part of the CRD itself? Which means, if I as an operator need to support five new plugins,

D
I create five plugin CRs, and then some controller picks those up and registers them automatically as sidecar containers, the way we have here, and everything else remains the same. In that case, number one, we don't need to have an API definition as a manifest which is not a Kubernetes CRD. Number two,

D
we don't need to have a different way of registering and discovery, because if there are five CRs in the argocd namespace, it is assumed that we should be able to watch any of the CRs and configure our workloads accordingly. This config management plugins API looks great, so the config management plugin could just be a CRD. You could have another section called spec.image where you provide the off-the-shelf image, and everything else remains the same.
D
If
I
want
to
plug
in,
I
create
the
cr
on
the
cluster
and
something
else
picks
this
up
and
registers
itself,
and
then
the
repo
server
uses
the
same
mode
of
communication.
What
do
you
think.
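To make the suggestion concrete, here is a hypothetical sketch of what such a plugin CR might look like. This is not an agreed-upon API, just the specification file from earlier with the proposed spec.image field added:

```yaml
# Hypothetical CRD-based variant of the plugin spec; a controller would
# watch these CRs and inject the matching sidecar into argocd-repo-server.
# All field values are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: cdk8s
  namespace: argocd
spec:
  image: example/cdk8s-plugin:latest   # the off-the-shelf tooling image
  version: v1.0
  generate:
    command: [sh, -c, "cdk8s synth && cat dist/*.yaml"]
  discover:
    fileName: "main.ts"
```

Under this model, `kubectl apply` of the CR is the operator's whole interface; the sidecar patching becomes an implementation detail, which is the trade-off debated below.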
A
Well, I think with both approaches, one way or another, we will have to inject sidecars into the repo server. We really wanted to make installation as easy as possible, and I felt we have two options. Either we go with this catalog approach: basically, the maintainers of these plugins for Argo CD could create Kustomize patches, and people can install plugins by adding Kustomize remote bases.

A
That's the way of bundling a plugin with the current approach. And I think what you're describing works too; it's possible to have a catalog of CRs, but in this case the repo server would have to be managed. The API server, or something, would have to notice "hey, a new CR got created, so I need to go and change the repo server deployment and inject more sidecars". I feel like both approaches are not perfect.
D
So I think the main reason I proposed the latter is that the way we configure the sidecars is a detail I would avoid exposing operators to, for a couple of reasons. One is that it may evolve over time, and I wouldn't want to expose that information to operators; at this point I'd rather have some kind of configuration controller that we ship, which looks at these plugin configs and does it. And not just over time; I think forever.

D
The only thing operators should be exposed to is how that CR looks; that will be versioned, and that's their only API interface for putting in a new plugin. How plugins get patched in as sidecar containers is totally an implementation detail that we as project developers should be burdened with, but as an operator, I should not be burdened with how my sidecar or workload YAML looks.
A
What do you think is the best way to achieve this idea? One way is to simply support plugins the way it's described in the document, and then as a next step we create a CRD, and the CRD manages the repo server deployment. The other way is to build that knowledge about sidecars into Argo CD.
C
Yeah, one thing: although we structured the spec to say "this is not a CRD, it's just a file", I don't think we've closed the door on it becoming a CRD in the future.

C
It was actually structured this way to leave that door open, because I think we do want to consider that in the future. But there's also the MVP that we want to build versus the future improvements, meaning:

C
we can prove out this technique with just the manual sidecar approach as a first step, at least, so that people run the sidecars themselves, or they modify the repo server deployment themselves. In the future there could be a configuration operator that understands these objects and does that for them.

C
I think we haven't closed the door on that ability; it's just whether or not we want to do it in the first go-around, and to me there's enough work that we shouldn't make that part of the MVP, the minimum viable product.
D
My concern is that if we release this, with documentation, to users, they're going to start using it, and then we shouldn't be in a situation where we say, "hey, stop using it, because we will do it for you", especially because these are pretty involved interfaces, which I would keep operators out of. Say, for example, we change the implementation of how this happens over time: operators should not be bothered at all; they should just be talking to one interface.

D
That interface is something very external, a CR or something else. I think that's one. And the second thing is: we do have a manifest, which I know looks very much like a CRD, but it's not a CRD at this point. If we do have people writing plugins against that manifest, they're effectively adhering to an API contract, which is great, but then if we change it, it may not work in future releases.
D
So yes, I'm completely happy to ensure we at least prove this out and have an MVP showing that this works, but before we ship it, my strong opinion would be that we should provide an easier interface for people to configure this.

D
It does solve the problem that we had to bundle these things into the Argo CD repo server image, but this approach introduces a problem of its own: there are a lot of involved steps here for operators to get it running. So, just calling that out. But I totally like this approach, I would say.
C
I think what we should do is use the future improvements section in this document: mention there that the CMP (config management plugin) configuration is potentially going to become its own CRD, with a controller that manages the repo server or injects sidecars based on the presence of these objects.
A
I
feel
like
this
discussion.
Also,
it's
not
the
first
time
we
have
exact
same
conversation.
We
recently
spoke
about
using
crd
to
manage
repositories,
and
probably
I
feel
like
we
should
be
consistent.
We
could
do
same.
We
could
do
the
same
thing
with
repositories.
We
can
have
a
repository
crg
and
two
ways
that
crg
can
be
used.
One
way,
argo
cd,
just
aware
of
it
of
it
and
uses
the
crd
or
another
way,
crd
manages
secret
that
and
kind
of
encapsulates
convention.
You
know
internal
implementation,
details
of
fargo
city
honestly.
I
like
this
second
way.
C
Okay, so with that in mind, I think that while we're doing this we always have to make sure we consider the fact that the CMP plugin spec may become a CRD. Here's a good example of where we would need to leave room: if the CMP plugin is a CRD, it actually has to specify the image, because the image currently lives in the repo server sidecar.
D
And in addition, I'm just wondering, because we haven't implemented this yet: would it also help simplify our discovery to an extent if we model it in a way where, say there is something for Kustomize, the process for discovery is as simple as "go and tell me what CRs are there, one of which supports Kustomize"?
C
Well, the only one who needs to do discovery at the moment is the main repo server. So I think there's no controller necessary at this point that needs to perform discovery; the Kubernetes API server would serve that, right?
C
They run user... actually, user-executed, sorry, user-defined commands.

C
Well, like I said, no matter what, it's the main repo server that wants to discover plugins, and I think populating the socket file into the shared volume serves two purposes.
C
One is to advertise: "okay, I see all these plugins available". Two, that socket file is the communication mechanism. So either way, we're going to be doing that, for discovery and for communication.
A
Yeah. To discover what's in the repo, the component that performs discovery must have the repo cloned, and right now that's only the repo server. The only way to get access to its file system is through volume mounts and shared directories, and I think that's why the only option we have is a sidecar inside of the repo server pod.
F
E
F
A
Yeah
and
I
think
another
we
have
assumption
that
it
will
be
easy
to
package
into
a
simple
yaml
file
that
can
be
just
cube,
ctrl
applied,
but
it's
not
yet
proofed.
I
feel
like
if
it's
possible,
that
means
in
theory
we
can
build
a
catalog
of
plugins
and
then
have
community.
A
I mean, basically, it kind of opens the door to, you know, maybe maintainers of Kustomize would be interested, maintainers of cdk8s would be interested to, you know, maintain a...
A
...specific configuration, so it's important. But I feel like, as part of the MVP, we must create at least one such YAML and make sure it's really easy, that it's possible to simply kubectl-apply it. If it's possible, then we can say confidently that, yes, installation is easy for the end user, it's not, you know, extremely complex. And I think it also...
A
Basically, if we make it really easy to install for operators of Argo CD, then it is less important to keep the spec stable, because we're not going to have an infinite number of such specs; we're going to have one for cdk8s, one for, like, Kustomize, for example. So, basically, one per config management tool.
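A sketch of what such a kubectl-appliable, per-tool plugin definition might look like. The kind, field names, and commands below are assumptions for illustration, not the finalized spec under discussion:

```yaml
# Hypothetical plugin definition, one per config management tool, packaged as a
# single YAML an operator could kubectl-apply. Every field here is illustrative.
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: cdk8s
spec:
  version: v1.0
  discover:
    fileName: "cdk8s.yaml"  # a repo containing this file matches the plugin
  generate:
    command: ["sh", "-c"]
    args: ["cdk8s synth && cat dist/*.yaml"]  # print rendered manifests to stdout
```

If definitions stay this small, a community catalog becomes a matter of collecting one such file per tool, which is the point made above about keeping the number of specs finite.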
D
D
But if that's the only interface the operator is talking to, then it could just be using a different off-the-shelf image, assuming the commands work, of course, yeah.
A
Actually, as part of that effort, we wanted to improve versioning as well, because even the existing support right now doesn't really play well with multiple versions. So in the doc you will see that the version is kind of baked into the metadata file. So we were thinking that, if you need to support cdk8s v1 and v2, you would have two sidecars with two different versions and two different metadata files, and the metadata file kind of handles the difference between them.
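The two-sidecar versioning idea could look like this; a minimal sketch assuming hypothetical metadata files, names, and fields:

```yaml
# Hypothetical: each supported tool version gets its own sidecar with its own
# metadata file; the version is baked into the metadata rather than negotiated
# at runtime. All names and fields are illustrative assumptions.
# metadata file mounted into the cdk8s v1 sidecar
metadata:
  name: cdk8s-v1
spec:
  version: v1
  generate:
    command: ["sh", "-c", "cdk8s synth && cat dist/*.yaml"]
---
# metadata file mounted into the cdk8s v2 sidecar
metadata:
  name: cdk8s-v2
spec:
  version: v2
  generate:
    command: ["sh", "-c", "cdk8s synth && cat dist/*.yaml"]
```

An application would then pick a plugin by name and version, with each metadata file absorbing the differences between tool versions, so the two sidecars can run side by side.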
A
A
Okay, yeah, I was going to, you know, just share my feeling, and I want to hear from you as well, about the current state of 2.0. So this is how the milestone looks: it has quite a lot of open issues, like 19, and this is how many of these are bugs, so seven open bugs. And basically I felt like the feature requests got added into the 2.0 milestone for different reasons, because some of them were in progress, some of them...
A
So my current feeling is that all the features that are not yet resolved in the 2.0 milestone are not blocking us and can be carried over to the next release. So for everything which is not a bug right now, you know, I don't feel terribly bad about not having it and having it in the next release. And here's the list of bugs that are not yet closed, and I can speak about every bug.
A
A
Cool, yeah. So basically, if everyone agrees, we should just focus on bugs. So here is the remaining list, and there are a couple of bugs that I wanted to mention, and yeah, these bugs are about sync waves and hooks, so sync hooks.
A
Basically, a lot of users reported that sync waves work differently in the 1.8 release, in particular if you have an app-of-apps pattern and sync waves. Just for your information, I keep watching the ticket, and it seems like it's still not fixed and no one knows how to reproduce it. I know that Jan worked on the ticket, I worked on the ticket; so far we have no clear understanding of when exactly it broke, and, you know, yeah.
A
D
A quick question, and I'm just asking this quickly because we're running out of time. The bugs which have no assignees: can we consider that they are up for grabs, so that somebody could start working on them? That's the first question. The second question: are they critical bugs, which means, if for a long time nobody picks them up and nobody fixes them, which of those have to be part of the next milestone and which could be dropped? Right, yeah. I think we...
A
Basically, we can discuss each one. I personally feel like we should keep trying, you know, to resolve these two, the one that we just talked about, because sooner or later some of the users who can reproduce it will give us the critical missing piece of information, and then it explains, like: oh, this was broken because of ABC.
A
A
So it's kind of, these bugs are... no one assigned them, no one self-assigned, because it makes no sense: even if you assign it to yourself, there is nothing you can do right away. All you can do is just sit and think hard, and eventually, yeah, eventually you know how to fix it.
A
Okay, and the other bug which has also been open for many months already is this TLS handshake bug. I feel like we spoke about it last time, but basically what we did: we just upgraded the Golang version, and we're hoping to create a release candidate with the new Golang version, deploy it, and, you know, see if we can reproduce it again. The problem with that bug is that it never happens on your minikube; it only happens if you have a lot.
A
And that's it. I think those are all the critical bugs we have. And this is a cosmetic, minor thing, which will be resolved, pretty sure, like today or tomorrow. And this is one more bug that's almost a day-zero bug; it has existed for a pretty long time, and I think it's in progress, it's just not assigned. We need to fix it.
A
I need to, you know, double-check who was working on it, but basically we know how to fix the bug, and it's not an extremely difficult bug to fix. All right, yeah. So, just to summarize: we have several bugs that won't be closed, and all the other bugs that we know how to reproduce have a straightforward fix, and I think we're really close to, you know, just getting it done. It's like a matter of days, yeah.
A
D
And one last question, and I know we are one minute past: from the maintainers' and project leads' perspective, Jan, do you have a rough ETA, or a week when you think you would be cutting the release? Yeah, I was trying to...
A
...say: as soon as we, you know... I pretty much expect we need to fix these two bugs and that's it, and it should take one or two days to get them fixed. Okay, as soon as it's done, yeah, we should go ahead and create the release candidate. That's true.
E
C
Does everyone have access to that doc, so that they can comment? And where do we share that?
C
So, well, we'll polish that, we'll add the stuff about the consideration of this as a CRD, and then, once it's a little more polished, I think we'll want to share that out more broadly to the community and then present it at the next community meeting, yeah.