From YouTube: Argo Contributors Office Hours Mar 10th 2022
A: Hi everyone, welcome to the contributors meeting today. I'm going to be your host. My name is Leo, and let's get started. Let me share my screen.
A: All right, I'm sharing the contributors document. Starting off with the triage for this week, we had Dan and Kate. Are you on call?
B: I am. I was secondary, so I can give you a brief recap, okay.
B: Yeah. I was just looking at the issues, and since last week I saw about 13 enhancements and 15 bugs opened. Some of them do not have a lot of detail, so I asked for more information.
B
One
of
them
was
a
ui
issue
regarding
the
retry
options
in
the
crate
panel
fix
that
one
and
yeah
thanks
for
thanks,
alex
for
quickly
reviewing
it
and
merging
it,
and
I
have
a
lead
on
another
fix
too
on
one
one
of
the
other
issues,
another
bug
but
yeah,
that's
basically
it
just
trying
to
get
more
information
and
yeah.
A: Cool, thanks, Kate. Okay, so now we need to decide who's going to be on triage for next week. Any volunteers? We need a primary and a secondary.
A: No? Okay, so maybe I can go next week; I'm going to be the primary.
A: All right, moving forward. Javi, it seems you want to discuss a Rollouts issue.
D: Yeah, so the issue is that, when analysis metrics are enabled in Argo Rollouts, if the data is not retrieved, it could be that there is no traffic on the application.
D: So we look at HTTP traffic and other signals to conclude whether the new deployment, when we use canary, is successful or not. If you want to do that kind of analysis and there is no traffic on the application at the point when the application team deploys a newer version, then the analysis considers that a failure and reverts back to the older version. So the issue raised here is to see if we can...
D: If there is no value that we are getting from the application, can we consider the rollout as passed, and when the traffic comes back to the application, can we start resuming the steps?
A: Yeah, I think it's a valid concern. I don't remember hearing whether there is a configuration to make the step wait until it gets some traffic, or something like that.
D
Secondary
the
other
issue
that
can
happen
is
if
we
wait
for
beyond
progressive
deadline.
Seconds
dollar
will
anyway
be
marked
as
failure
overall,
but
we
can
keep
those
values
based
on
the
application
traffic
patterns.
D
It
can
get
in
yeah,
it
can
get
into
a
pass
state,
but
once
you
know
the
analysis.
F: I'm not an expert in Rollouts, but I remember we have an inconclusive status, and I just looked up the description in the documentation. Basically, with inconclusive in Rollouts, you can provide a success condition and a failure condition, and if you write these conditions in a way such that none of them returns true, then the rollout becomes inconclusive. Here is the documentation; if you open it, it will help.
D: ...when we are getting the metric back. But if we are not even getting a metric back, I think that is considered as a...
F: It's difficult to get an inconclusive result when there are no metrics, because you would have to handle this error in both the success and failure conditions. But maybe we can improve it: if we had, for example, an inconclusive-condition expression that checks for the presence of metrics, then when there are no metrics it would just return true and the rollout would automatically pause as inconclusive. So I guess we're already very close to what's required here; we just need this.
D: Maybe we can, yeah, we can add an additional condition to close the gap, so that if there is no metric that we are getting, then we can make the progressive rollout get into an inconclusive state.
F: Just to validate this idea: I'm assuming that when we have no metric, then in this example that we're seeing right now the result would be empty; it would be like no object. And maybe we could have an inconclusive-condition configuration right here, and then it would check the result and return true if it's empty, and in this case the rollout should be considered inconclusive.
F
That's
actually,
there
is
another
option.
I
I
I'm
not
expert
in
that
language,
but
basically
we
can
repeat
the
same
check
in
both
success.
Condition
and
failure.
Condition,
like
you
can
say,
result
is
not
empty
and
result
is
more
than
something.
And
then,
if
you
have
this
check
here,
then
both
success
and
failure
condition
would
return.
False
and
robot
will
be
inconclusive.
So
it's
kind
of
already
possible,
I
think,
but
not
not
convenient.
Okay,
that's
yeah.
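The pattern described here, repeating the emptiness check in both conditions so that a run with no metric data satisfies neither and ends up Inconclusive, could be sketched roughly like this. This is an illustration, not a tested template; the metric name, Prometheus address, query, and threshold are all placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate            # hypothetical name
spec:
  metrics:
  - name: success-rate
    interval: 1m
    provider:
      prometheus:
        address: http://prometheus.example.svc:9090   # placeholder
        query: |
          sum(rate(http_requests_total{status=~"2.."}[5m]))
          /
          sum(rate(http_requests_total[5m]))
    # With no traffic the query returns no samples, so result is empty and
    # BOTH conditions evaluate to false. The measurement is then neither
    # Successful nor Failed, the AnalysisRun ends up Inconclusive, and the
    # Rollout pauses instead of aborting.
    successCondition: len(result) > 0 && result[0] >= 0.95
    failureCondition: len(result) > 0 && result[0] < 0.95
```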
D: Sure, I'll check that. For now I was thinking that for the inconclusive runs we need to get a value, but yeah.
D: It will land the rollout in a paused state, and human or manual intervention is required to get it back into, you know, the next state, right?
D
Can
we
build
some
kind
of
intelligence
in
the
rollout
such
that
if
the
metrics
are
flowing
in
it
can
automatically.
A: Yeah, the only problem I see with this approach is having the rollout resource running for a long time, right? It might happen that this event never occurs, or we never receive it in Rollouts, so it would mean that the resource is constantly running and never ending. I guess that's why it is designed this way. So we could have, like, a timeout, maybe.
D: We can have a timeout, yeah.
A: Maybe, yeah, but it's something to consider. I understand your suggestion.
A
I
guess
it's
something
to
consider
in
the
proposal
I'm
I
like
alex.
I
also
agree
that
we
can
do
something
to
improve.
I.
A
Thank
you
all
right,
so
moving
forward
to
the
next
topic,
hisham.
C: I am bringing this proposal to decide on the approach for merging the ApplicationSet code base into Argo CD. ApplicationSet was created with the intention that it would, in the future, come into Argo CD, and it has matured enough; we have also bundled the installation with Argo CD. So, yeah, it's the right time to merge the ApplicationSet code base into Argo CD, now that the ApplicationSet installation is bundled with Argo CD.
C
We
have
also
get
into
issues
list.
If
I
list
two
of
them,
one
is
the
circular
dependency
with
respect
to
release
and
other.
We
have
two
different
docker
images
that
is
being
created,
one
for
argo
cdn,
one
for
application
set
yeah.
So
to
overcome
this,
we
need
we
need
to
decide
on
to
merge
the
application
set
code
based
into
our
cd.
C
So
on
top
of
it,
if
I
see
we
can
merge
it
in
two
ways:
maybe
merging
the
application
set
controller
with
the
argo
cd
application
controller
and
then
add
the
api
support
and
grpc
support
into
the
existing
set.
Existing
argo
cd
server
like
existing
arbo
cd
is
and
another
option
would
be
to
treat
the
application
set
as
a
separate
microservice
yeah.
So
I
have
created
this
proposal
and
have
listed
some
of
the
pros
and
cons
that
came
to
my
mind.
C
Yeah
so
would
like
to
receive
some
feedback
also
merging
the
application
set
into
this
application,
set
controller
into
the
app
into
the
argo
cd
application
controller.
We
can
follow
two
approach.
One
approach
can
be
like
the
kubernetes
controller
manager
gives
it,
but
it
has
the
multiple
controllers
in
it
and
basically
it
invokes
and
looks
after
it.
We
can
follow
that
approach
or.
C: Could the question of how sharding is done be considered as part of this?
C
So
I,
like
I,
have
mentioned
if
we
are
changing
from
how
sharding
is
done,
like
moving
from
how
based
on
number
of
managed
clusters
to
number
of
crds
application
crds.
F: Yes. Sharding right now is done based on clusters, because that's how we can win performance improvements. Basically, if you shard by the number of applications... the amount of work the controllers are doing is mostly based on clusters, not applications. That's why sharding is done the way it is done right now, so it will be difficult; I just know that it will not be easy to change it.
F: All right. And the next question: when we merged the notification controller into Argo CD, we kind of sacrificed the commit history; we decided not to bother. But I know it's possible to preserve it. I don't know; I think it depends on how the people who worked on ApplicationSet feel, like Jonathan and Michael. How do you feel about it?
F
Do
you
want
to
preserve
that
history
and
do
you
feel
it's
important
or
you're,
okay
to
just
move
files,
copy
them
and
then
all
the
track
of
your
work.
C: Yeah, so it would be added to the same Dockerfile; a new binary would be created. Apart from that, when we add the back-end support for ApplicationSet, maybe we will have an additional server to serve ApplicationSet requests, and this also has the con of delegating requests: maybe, when we develop the CLI, we might submit a request to the Argo CD server, and it may delegate the request to the...
E: Okay, I would just add: my expectation is that ApplicationSet would probably be part of the actual Argo CD binary, in the same way that the repo server, the application controller, the server, and the other components are. I don't think we would be different from any of those.
F: Do you feel it makes sense to talk about trying to reuse the repo server to clone the Git repository? Once we merge ApplicationSet, we have this kind of opportunity to make a change in the repo server and expose the APIs that ApplicationSet needs to inspect the repository. If those APIs are available, ApplicationSet no longer needs to clone the repository itself in the ApplicationSet pod; instead, it can rely on the repo server.
E: Yep, yeah, agreed. We probably want to do that in a couple of stages: like first, do the merge and start the integration. And yeah, I would say something similar for any other big integration work that we want to do with ApplicationSet: probably good to start small, get it integrated, make sure that the unit tests are passing and the end-to-end tests are passing and everybody's happy working together, and then start looking at some of the other integration work for the repo server.
E
I
think
at
the
moment
we
there's
not
a
way
there
isn't
an
api
on
repo
server
to
retrieve
the
contents,
like
the
full
contents
of
a
git
repo.
It's
just
like
the
ability
to
get
the
manifest
from
a
repo,
so
you
would
probably
need
something
like
oh
give
me
the
full
contents
of
the
git
repo.
Something
like
that,
and
I
remember.
F
We
were
discussing
to
it.
Apis
such
as
I
don't
know
around
the
shell
command
and
then
return
the
results
on
the
ap
server,
and
I
think
we
decided
not
to
do
it
because
it
would.
It
would
be
strange
because
ripple
server
would
have
apis
that
are
not
used
by
rvcd
and
only
used
by
projects
somewhere
else,
but
yeah
now,
once
we're
merging
two
product
projects
together
it
you
know
now
it's
much
easier.
We
can
just.
F: Yeah, but I mean, I mentioned that improvement because we started talking about ApplicationSet. But I feel like, yeah, I would merge it as-is first, without making changes, because otherwise it would be...
G
As
if
yeah
as
a
fallout
like
yeah,
I'm
just
not
just
saying
that
that
was
a
good
idea.
H
We
can
have
all
these
simple
mergers
first
and
then
all
this
improvement
all
this
refactoring
moving
performance.
I
think
we
might
wanna
capture
in
the
proposal
as
well.
A: Okay, so from this proposal I'm understanding, correct me if I'm wrong, that there is a preference for going with the option-two approach, that we are going to implement it the simplest way possible, and that optimizations are going to be implemented in future proposals.
H
Yeah,
that's
what
I
hear.
I
think
even
this
proposal
I
think
pusha
is
going
to
put
in
more
details,
probably
capture
some
of
the
discussion
but
yeah
and
that's
think
option.
Two
you'll
have
applications
set
in
the
separate
workflow
and
the
controller.
H: I think that's the current consensus right now.
H
To
the
google
server
like
john
johnson's
edit,
I
think
this
will
we
need
to
put
this
in
this
proposal
as
well.
A
Okay,
that
makes
sense
anything
to
add
anyone.
F: So basically you're asking: do we need a code freeze, or do we try to, you know, keep developing, keep accepting PRs in ApplicationSet, and start merging? Is that the main question?
C: Once the merge is done, then we do the code freeze and make additional merges of what has been contributed to master.
E: ...weeks, for example, it should be pretty minor. Unless we accept a gigantic contribution in that time, and there haven't been too many of those in the history of ApplicationSet, they would tend to be pretty minor changes, scoped to a small number of files.
E
Unless
someone
contributed
something
large
like
a
generator,
we
we
could
probably
just
hold
off
on
merging
that,
but
small
stuff
would
probably
be
straightforward.
E
I
my
recommendation
would
be
to
continue
to
accept
prs,
but
just
make
sure
that
they're
small
and
fairly
simple
rather
than
freezing
and
then
once
the
integration
with
arc
cds
completes
to
merge
any
outstanding
prs
that
were
added
to
application
set
after
the
the
code
cut
off
and
integration.
A
Pr
template
is
one,
and
I
would
also
suggest
that
you
could
also
change
the
readme
file,
put
something
on
the
top
in
the
application
set
readme
file,
saying
that
there
is
a
on
on
ongoing
process
to
merge
this
and
yeah
just
describing
the
the
yeah
the
procedure
there.
H: Yeah, so this one, I think, has been discussed and has been proposed; I can click on it, the link there. It's basically allowing Applications to be created outside of the namespace of the Argo CD instance.
H
So
I
am
basically
asking
you
know
people
who
who,
if
to
review
it
and
then,
if
you
haven't
reviewed
it
and
then
maybe
you
need
to
close
that
soon
it's
been
yeah.
The
proposal
has
been
created
quite
up
quite
quite
some
time.
I
think
we
need.
We
should
visit,
closing
closing
on
closing
on
it.
I
I
would
vote
to
merge
it.
F: I think even if there are unanswered questions, in my understanding everyone agreed that it should be implemented. Maybe some details should be clarified, but I would rather merge the proposal and then start building it, and we can always improve it as we build it. So that's my opinion.
H: Okay, I saw this; I had a lot of comments, and I think John already addressed them. So I think we just need people to go over some stuff and, yeah, get it in, yeah.
F: I feel like I will just approve, and then I encourage someone else to approve as well, and once we have two, we can merge it. It's just my observation: I have a feeling that we are not really carefully reviewing a proposal until it's kind of a priority. If we plan to work on the feature in, you know, the next release, then we rush to review and discuss the proposal as much as possible, and I think that would help this particular feature.
H: It's a chicken-and-egg problem, I think. I think we have people who, once it's been approved, will work on it and see if we can get it into the next release. All right, then...
F: Yeah, I suppose. Cool. And yeah, we need something; I wish we had more maintainers here. But at least, you know, an approval from myself, and then I'll ping everyone I know to either object and speak up or approve as well, and then, if we don't get any objections, merge it. Sounds good.
D
But
yeah,
essentially
I
wanted
to
talk
about
the
notification
engine,
so
what
is
happening
is
we're
contributing
the
notification
engine
the
additional
integrations,
such
as
pager
duty
and
recently
contributed
to
pagerduty
there.
D
One
of
the
challenges
has
been
the
documentation,
for
it
is
still
being
referred
to
our
question
notifications
right
so
should
we
move
out
of
that
and
notification
engine?
Has
you
know
its
own
documentation
and
because
arc
is
referring
to
the
notification?
Engine
can
be
referred
to
the
notification
engine
documentation
directly
for
the
arc
reloads.
A
I
guess
from
my
understanding,
sorry
you
from
my
understanding.
Your
suggestion
is
to
move
the
notification
engine
documentation
inside
argo
cd
documentation
is.
Is
that
what
you're
saying.
D
No,
no,
no.
We
should
move
the
documentation
for
all
the
notifications
into
notification
engine
documentation
and
make
that
as
a
standard,
rather
than
our
facility
notification
recommendation,
because
they're
contributing
features
into
notification
engine
that
can
be
leveraged
by
pulse
record,
rollouts
and
stuff
like
that.
D
Is
getting
is,
I
think,
argo
city
notifications
is
not
getting
updated
too
frequently.
F: Just for everyone, to introduce some context: basically, the notification engine repo has a bunch of Markdown files that describe common functionality, and then there is a small framework for all the projects that use the engine, and there is a codegen task that lets you copy those files into your own documentation. But the problem is that typically in documentation you want to provide some examples, and to give an example we just chose...
F
I
mean
we
just
we
kept
using
our
good
cd
application
as
an
example,
and
it's
not
the
best
choice.
I
think,
because
it's
pretty
much
became
margo
cd
notifications,
documentation.
Not
I
don't
know
how
we
can
improve
it.
Like
one
option,
is
to
introduce
some
kind
of
templating
and
make
it
possible
to
override
examples
in
on
the
consumer
side.
For
example,
like
argo
cd
can
import
the
documentation
from
notification,
engine
and
overwrite
examples,
but
I
do
not
know
how
to
do
it.
We,
you
know,
we
use,
read
the
docs
and
I
don't
even
know.
F
I
will
kind
of
rewrite
all
the
documentation
to
use
sample
like
imaginary
crd
and
that
imaginary
id
it
would
not
be
neither
argustic
application,
but
that's
not
the
best
user
experience,
because
user
will
have
to
like
user
will
see
imaginary
c
id
in
their
allowed
documentation
or
in
rbcd,
so
yeah
at
least
that's.
This
is
why
things
the
way
they
are
right.
Now
we
have
the
choice
between
infant.
Like
imagine,
researching
versus
some
serious
that
people
already
know,
and
we
chose
application
clg
that
we
use
in
examples.
D
Right
should
I
understand
that
what
I,
what
I'm
thinking
is
at
least
for
projects
that
are
using
notification
engine.
Can
we
have
separate
documentation
for
that
and
let
them
generate
the
docs
out
of
the
notification
engine
docs.
Maybe
we
can
for
now
isolate
the
documentation.
D
We
can
rewrite
it
in
the
notification
engine
instead
of
generating
from
the
hardware
security
notifications,
so
the
projects
which
start
using
notification
again
can
generate
the
documentation
out
of
notification
engine
and
which
refers
to
our
facility.
Notifications
can
still
go
back
to
that
at
least
temporarily
that
will
solve
until
we
have
a
long
term
solution
to
have
one
thing
in
place.
Maybe
we
can
we
can
explore
those
options.
D
Right
what
what
I'm
trying
to
say
is:
we
have
two
repositories,
essentially
a
notification
and
arguably
notifications
right.
So,
let's
maintain
independent
documentation
for
each
one
of
them
and
most
of
the
applications
either
choose
one
of
them
right
and
whichever
project
is
consuming
either
of
them.
D: Specifically, I mean, the generators can be used in that particular project: like, Rollouts can use the notification engine documentation and generate the docs in Rollouts, and Argo CD can continue to use argocd-notifications.
D
Yeah,
so
we
have
only
two
repos
where
this
is.
You
know
this
is
being
used
right.
One
is
notifications,
engine
notification
again
and
the
other
one
is
our
saving
notifications.
F: Today it's supposed to be the primary source of truth, right? Right now, but, like, people... oh, maybe I need to clarify: what do you mean by that? Do you want users to basically go to the notification engine repo and read the documentation there? Is it? No? No, so...
F: ...project, right? Oh, is this what you see right now? I think it's out of date; we just forgot to delete this. You know, it's left over; we should just remove it, because I don't think we generate it, or we don't run it anymore. Basically, we introduced the exact same target in argocd-notifications, and that target pulls the docs from the notification engine and then kind of injects them into argocd-notifications, and that's why we have the docs...
F
We
have
notification
description
inside
of
argosy,
documentation,
yeah,
and
but
maybe
this
time
can
I
show
the
screen-
hey
sure,
yeah
I'll
just
I.
I
just
want
to
show
you
the
problem
that
I
was
referring
to.
So
this
is
the
so
we
have
this.
F
So
in
my
as
I
understand,
that's
the
only
problem,
like
those
examples,
are
specific
and
right
now,
for
example,
if
you
navigate
to
algorithms
documentation,
you
will
see
this
this
as
an
example,
but
it
doesn't
make
sense
for
allowed
yeah,
but
yeah
I
mean
those
files
that
I
was
showing
the
service
service.
Specific
documentation
are
only
in
the
only
available
in
notification
engine,
but
we
no
longer
use
files
from
you
know
those
files
from
argus
identifications
or
from
from.
D: All right then, that's fine. I mean, for me, if the software already works that way, that's good. That's what... I don't know, we should come up with... yeah, the problem as such is still yet to be solved.
F: We could just avoid mentioning any kind of CRD at all; we could just limit it. I mean, this particular example can be limited to just the annotation, so that, for example, this snippet is just trying to explain what kind of annotation you're supposed to put. No one forced us to have this; we can just get rid of it, and then it will no longer be Argo CD-specific. And same here.
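The annotation-only snippet being proposed might look like this. The trigger, service, and channel names are illustrative; the point is that nothing in it ties the example to a specific CRD:

```yaml
metadata:
  annotations:
    # Generic form: notifications.argoproj.io/subscribe.<trigger>.<service>: <channel>
    # Example with illustrative trigger/service/channel names; the engine
    # reads only the annotation, regardless of which resource carries it.
    notifications.argoproj.io/subscribe.on-created.slack: my-channel
```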