From YouTube: Argo Contributors Office Hours Aug 28th 2023
A: Hello everyone, welcome to the contributors meeting. I'm going to be your host today — I'm Leo — and as usual we're going to start with triage discussion. We had Michael and Ishita this week. Michael, anything worth mentioning? I saw you added two topics here. Do you want me to click and open the issues?
B
Yeah
we
can
run
through
those
real
quick
I'll
mention
briefly
2.9.orc
one
is
out.
I
haven't
seen
anything
particularly
dramatic
in
terms
of
bugs
on
that
so
y'all,
please
run
those
run
the
RC
internally
and
let
us
know
if
you
hit
trouble
on
this
one
I,
don't
remember
what
this
is.
Oh
yeah,
this
is
cool,
so
people
who
run
Argo
CD
in
a
multi-architecture
environment
so
like
they
have
pods
running
both
arm
and
AMD.
B: They noticed that they'll get 404s in the UI when they request the UI bundle. That's because sometimes they'll hit an ARM server, it'll give them the hash for the ARM version of the bundle, and then the request for that bundle will be handled by an AMD pod, which doesn't have the ARM version of the UI bundle.
B
So
one
option
folks
have
is
to
like
pin
I,
think
pin
sessions,
I
forget
exactly
the
the
mechanism,
but
basically
make
sure
that
the
same
API
server
handles
all
the
requests
for
a
given
user.
Another
option
is,
we
could
just
make
sure
that
the
same
bundle
is
produced
for
all
images
by
removing
the
feature
that
shows
on
the
download
page
what
architecture
the
CLI
download
is
for,
and
instead
just
direct
them
to
the
releases
page
to
download
the
CLI
for
themselves.
B
Yeah
I,
don't
know
that
that
didn't
seem
like
a
terrible
idea
to
me.
I,
don't
know
how
attached
people
are
to
like
showing
the
architecture
in
the
the
UI
for
the
CLI
download,
but
that's
one
option.
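The "pin sessions" idea mentioned here can be sketched with plain Kubernetes session affinity, assuming the API server pods sit behind a standard Service (names and ports are illustrative; real deployments often do this at the ingress/load-balancer layer instead):

```yaml
# ClientIP session affinity keeps a given client talking to the same
# backend pod, so the bundle hash and the bundle itself are served by
# the same architecture.
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
spec:
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  ports:
    - port: 443
      targetPort: 8080
```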
A: One thing that came to my mind — I was digging into this, not specifically this issue, but something related at Intuit this week — and I found that there's a feature in Argo CD that automatically downloads the client, right? So the Argo CD client can be downloaded from it.
A: Related to that — I'm not sure, but I'm just mentioning it because it could be something related to that specific feature.
B
It's
slightly
related,
so
we
we
do
offer
the
ability
for
a
user
to
hit
the
API
server
and
say:
hey.
Send
me
the
CLI
and
the
CLI.
That
sent
is,
of
course,
the
one
that's
built
for
whatever
architecture
the
image
is,
and
that's
fine
so
like,
if,
if
like
into
it,
mostly
runs
AMD
pods.
So
if
I
download
the
image
I
can
expect
it
or
work
on
a
Linux
AMD
machine.
B
That's
all
fine!
The
problem
happens
when
we
want
the
UI
to
indicate
to
the
user
what
architecture
the
the
image
is
running,
because
that
requires
a
build
time.
Toggle
in
the
webpack
bundler
and.
B
Yeah
API
server
image
or
just
the
Dargo
CD
image,
so
we
have
build
time,
webpack
logic
that
changes
the
bundle
depending
on
the
architecture.
C: Yeah, so maybe we just stop using statically compiled stuff in the bundle there. I'm proposing it because we ran into the same problem and tried to solve it with session stickiness. It seems to be working most of the time, but there's no 100% guarantee, especially on the very first request. So basically, when a user opens the page for the first time, the browser sends a bunch of requests in parallel, and sometimes we still get a 404.
B
And
we
can
so
we
can
tell
them
based
on
an
API
server
response,
rather
than
a
build
time,
toggle
exactly
yeah,
that
we
might
still
be
lying
to
them,
depending
on
how
requests
are
being
served
because
like
if
oh
yeah,
if
you're
randomly
picking
a
back-end
server?
Who
knows
which
one's
going
to
answer
the
API
request
versus
the
CLI
request,
but
like
relying
to
them
now
so
lying
via.
This
is
not
really
that
that
much
worse,
we.
C
Also
have
configurable
versions
right,
it's
possible
to
volume,
Mount
files
into
image
and
configure
so
I
think.
If
someone
is
running
you
know
Argo
CG,
on
mixed
environment
and
want
to
make
sure
the
user
gets
the
same
binary.
C
But
if
there
was
yeah
so
I
guess
my
point
is
maybe
maybe
a
default
Target
installation
should
not
even
guarantee
it,
and
you
know
the
whoever
is
working
on
on
the
setup
I
think.
Maybe
we
already
provide
tools
that
required
to
reliably
serve
the
same
binary.
Even
if
you
have
different
nodes
with
different
reconstruction.
B
Yeah,
okay,
then
so
maybe
what
I
recommend
is,
let's
get
rid
of
the
build
time.
Toggle
instead
serve
it
up
from
the
API,
and
then
just
people
are
still
going
to
have
the
problem
of
like
unexpected
architectures
on
download
and
a
mixed
architecture
environment,
but
they
should
still
just
do
what's
described
in
this
issue,
which
is
whatever
pinning
mechanism
make
sure
you
stay
with.
The
same
architecture
seemed
like
reasonable
First
Step
anyway,.
A: All right, thanks Michael. Next one: Argo CD triggering resource deprecation warnings on cloud providers.
B
Yeah,
this
one's
maybe
a
little
bit
tougher,
so
AWS
gks,
gke
anyway,
Google
and
Amazon.
They
monitor
requests
for
different
resource
kinds.
If
you're
requesting
a
version,
that's
up
for
deprecation
it'll
show
you
warnings
and
potentially
I
think
even
block.
Your
upgrade
folks
are
noticing
that,
like
they're
kinds,
that
they're
not
even
using
Argo
CD
for
that
are
now
blocking
upgrades,
because
Argo
CD
is
watching
everything
I
pointed
out
resource
exclusions.
B
That's
fine
ish,
but
maybe
you
don't
want
to
exclude
an
entire
resource
just
due
to
one
version
upgrade
and
I
also
pointed
like
resource
exclusions.
Also,
don't
work
if
you
have
a
bunch
of
kinds
and
like
have
to
con
constantly
go
back
and
add
new
exclusions.
B
So
all
that
to
say
it
would
be
dope
if
there
was
some
way
Argo
CD
could
know
like
I've,
never
actually
interacted
with
this
kind.
I've
never
seen
a
resource
managed
of
this
kind.
I've
never
seen
one
in
a
resource
tree
of
this
kind
and
just
stop
watching,
but
I,
don't
know
how
difficult
that
would
be
to
do.
D: This problem is very familiar to us, because OpenShift has a similar thing, right? It warns you about deprecations and triggers a user warning like: hey, you're going to upgrade to the next version, but you're using these deprecated APIs. But you're not using them — the application controller is just watching them. It is annoying, I agree.
F
But
I'm
curious
how
it
blocks
upgrades
though,
like
you
have
a
controller
which
is
trying
to
watch
things
sure
it
shouldn't
be
watching
it,
but
it's
probably
annoying
to
see
those
things
in
the
logs.
It's
probably
distracting,
but
I'm
wondering
how
does
it
block
upgrades.
D: If you run that cluster and get this warning, and it says: hey, you're using this — this confuses a lot of people, because they assume: oh wow, I'm using this deprecated thing. But no, they're not using it; the application controller is using it. So the question everyone asks is: hey, will this block my update?
A
And
correct
me
if
I'm
wrong,
but
the
watch
doesn't
specify
the
version.
So
what
specifies
the
version
is
the
is
is
what
is
provided
in
git.
So
as
far
as
I
remember
the
logic
in
in
Argo
City
it.
It
checks
the
API
version
defined
in
the
desired
state
in
git,
and
asks
the
API
server
the
cube
API
server
for
that
resource.
In
that
specific
API
version
and.
D: Yeah, basically what happens is that the application controller will list all available APIs during cluster cache buildup, and it discovers PodSecurityPolicy in the API group policy/v1beta1 and watches it.
D: Right, and the version is right. It's not a bug per se — it's not preventing anything, it doesn't affect any function. It's just annoying because it's exposed to the user, if they have these warnings, if they monitor them and they surface somewhere.
D: Yeah, well, you already mentioned it. You can teach the application controller to ignore this particular API: you put it into the exclusion list, and then it won't watch that API anymore and this particular warning isn't issued anymore.
D
I
would
even
say
that
this
is
not
a
a
complex
workaround
to
implement,
because
these
API
duplications,
like
I,
think
we've
seen
them
for
the
like
in
the
in
the
last
two
years,
so
kubernetes
deprecates,
two
or
three
apis
per
release,
something
like
that,
maybe
maybe
less
for
a
particular
release,
maybe
four
or
five
for
another
release.
But
you
you
don't
have
to
carry
this
configuration
around
once.
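The exclusion-list workaround described here lives in the `argocd-cm` ConfigMap via the documented `resource.exclusions` setting; for the PodSecurityPolicy example it would look roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Stop the application controller from watching this API on all clusters,
  # which also silences the deprecated-API usage warnings it triggers.
  resource.exclusions: |
    - apiGroups:
        - policy
      kinds:
        - PodSecurityPolicy
      clusters:
        - "*"
```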
A: The problem of not allowing Argo CD to be updated in EKS and GKE clusters, and what the user is seeing as a warning in Argo CD logs — I'm not sure those two things are related.
A: This particular issue is that the user is concerned about those warnings, but it's a false warning, as we already discussed. The thing about not allowing Argo CD to be updated in GKE and EKS clusters might be something else. So maybe I would go in that direction with the user, to really understand why EKS and GKE clusters are blocking. Maybe they're using a specific tool which analyzes logs — it's a possibility, but with the information we have in this ticket, I'm not able to tell.
B
I
do
think,
there's
potentially
an
interesting
performance
enhancement
to
be
had
because
suppose,
they're,
you
know
a
thousand
different
kinds
or
API
version
kinds
on
a
cluster
and
Argo
CD
is
watching
all
of
them,
but
our
manifests
only
ever
touch
20
of
them
and,
like
the
children,
resources
of
those
20
only
ever
touch
20
more.
It
would
be
interesting
for
Argo
CD
to
kind
of
sleep,
the
watches
on
all
of
the
other
types.
Until
we
encounter
a
need
to
monitor
it.
D
It's
also
complex
because
you
don't
know
like
you
know,
take
for
example
deployment.
You
will
have
a
deployment
in
your
in
your
git
right,
but
that
will
result
in
a
replica
set
and
a
pot
that
obviously,
basically
don't
know
about,
because
it's
not
in
git
right
so
and
the
same
probably
is
true
like
take,
for
example,
an
operator
right,
you
you
have
you
put
the
operand,
so
you
have
the
operand,
you
don't
know
what
this
operand
spawns
right
like
it
could
spawn
config
Maps.
D: Cool, interesting idea — it's just complex, yeah.
B: Let me try to get a sense from the folks on this thread how blocking it is, and then we can try to identify the shortcomings of the resource exclusions and find a little more usable way for folks to get past these messages or blockers or whatever. But I think this will be a conversation, not something we can really solve on this call.
A
Yeah
just
to
add
a
little
bit
more
piece
of
information
Michael
and
into
it
I
know.
Platform
folks
are
using
some
Library,
some
binaries
to
help
identifying
a
applications
using
deprecated,
API
versions.
Maybe
the
problem
that
IKS
and
gke
cluster
are
are
referring
to
are
using
those
those
those
binaries,
those
those
applications,
and
it
will
be
a
matter
of
updating
those
applications.
I'm
trying
to
remember
exactly
how
the
application
works,
the
one
used
by
by
into
it
I
I,
but
I'm
failing
to
remember
so
it
could
be
something
related
to
that.
B
Right
but
maybe
for
other
folks,
it's
a
third
party
tool.
A
Oh
yeah
anything
else
for
this
one
nope.
D
I
I
have
another
issue
for
triage
that
isn't
on
the
list.
Sorry,
probably
a
quick
one.
Let
me
paste
it
into
yeah
into
the
chat,
find
it.
D
So
he
did
that
Alex
just
left,
because
I
would
love
his
opinion
on
that.
So
this
is
about
the
sync
retries
and
basically,
if
you,
if
you
have
auto
sync,
enabled-
and
you
have
the
sync
retries
on
it's-
it's
a
rather
old
issue
but
I
just
stumbled
over
it,
because
there
was
some
recent
discussion
and
it
this
Behavior
annoyed
me
as
well.
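For context, the retry behavior under discussion is configured per Application in the sync policy, roughly like this (values are illustrative; the fields are the documented `syncPolicy` ones):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
spec:
  # source/destination omitted for brevity
  syncPolicy:
    automated: {}        # auto-sync enabled
    retry:
      limit: 5           # retry a failed sync up to 5 times
      backoff:
        duration: 5s     # initial delay between attempts
        factor: 2        # exponential backoff multiplier
        maxDuration: 3m  # cap on the delay
```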
D
So
what
happens
is
that
when
you
have
an
error
in
your
source,
the
retry
will
never
succeed,
because
it
won't
take
into
account
any
new
commit
to
git
it.
So
I
think.
If
when
when
the
retry
feature
was
was
developed,
it
was
like
you
know,
assuming
that
you
have
a
CR,
a
new
application
and
the
crd
does
not
exist.
It
will
come
by.
It
will
be
installed
by
another
application,
for
example.
D: It would change the default behavior, so I was thinking that maybe a new toggle in the retry stanza would be good. But that would be a behavior change and would prevent us from cherry-picking into previous releases. But yeah, Alex is unfortunately gone, because I'm not sure what kind of side effects this change would have.
D
I
just
fixed
it
right
before
this
meeting,
because
I
have
locally
in
my
development
branch
through
just
two
lines
of
code.
D: Oh, the change just resets it. If you look into the application status, there is a sync state field, and it keeps the commit SHA that should be synced against. So the retry will use the very same commit SHA for all retries, and that's why it doesn't take into account any changes committed to git to fix an issue. The change just resets the commit SHA, so the next retry will perform a refresh of the application.
D: Yeah, that's what I thought — maybe just a boolean that says refresh true/false, or something like that. But yeah, that would basically only be a forward fix, so we would not be able to cherry-pick it back into the support releases.
G: There's a status if you interrupt a rollout, isn't there? Like, if you start a change and then you make a new commit before that change is able to complete rolling out — I'm thinking of maybe Argo Rollouts that does this — it shows up as an interrupted state or something, and then it just pushes forward. Because it doesn't seem like it should update an existing sync that's on a retry.
G
It
seems
like
it
should
terminate
that
sync
and
then
start
the
new
one
right,
because
if
you
have
like,
if,
if
it
gets
stuck
and
it's
on
a
retry,
it's
not
gonna
redo
Precinct
hooks
right
but
like
if
you're
rolling
to
a
new
commit
Shaw.
You
may
have
Precinct
hooks
that
you
want
to
operate
so
shouldn't
it,
not
update
the
sync
that
it's
on
and
retry.
It
should
terminate
that
sync
and
then
let
the
new
one
start
do
I.
Have
this
mixed
up.
A
But
is
it
true
that
Ruby
dry
won't
execute
Precinct
hooks
on
on
whenever
it
tries
to
sync
again
I'm,
not
I,
never
tested
this
scenario
so
I
try
to
think
something
failed
it
this.
My
my
application
was
configured
with
a
Precinct
hook
and
retry
will
the
precinct
hooks
always
be,
will
be
executed
or
not?
That's
that's
the
scenario
you're
you're
referring.
D
It
it
probably
really
depends
on
where
the
sync
fails
right.
If,
if
it's
like,
if
it's
failing
in
the
dry
run
phase
like
you,
you
have
a
missing
crd
in
the
cluster
or
you
know
you
have
a
wrong
Yammer,
then
probably
the
the
precinct
hooks
are
not
executed,
because
the
swing
is
not
really
running
right.
It's
just
in
the
in
the
precinct
in
the
sorry
in
the
dry
run
phase.
D
But
if
you
have
like
something
like
admission
web
hook,
that
would
fail
to
admit
the
resource
during
the
hot
face
of
the
sink
right,
yeah
then,
then,
probably
yeah,
that's
at
least
yeah.
You
need
to
make
sure
you
probably
need
to
roll
into
investigate
this
yeah.
That's
a
that's
a
very
good
point.
A
And
then,
if
you
add
this
additional
configuration
when
where
you
think
this
should
be
added
yeah
in
the
application
resource,
yes.
A: Makes sense, sounds good. Yeah, I was just thinking whether we want to have that globally applied in the controller as well, but I mean, we can discuss that.
D
Not
really,
there
is
a
there
is
a
related
issue
but
think
we
we
can
discuss
this
offline
so.
A: Right, okay. Any topics for discussion someone wants to bring up last minute?
B
Ede
tests
they've
been
flaky
lately,
thanks
to
Leo
and
Zach,
for
pointing
out
some
ways
to
mitigate
that
boiled
down
to
independently
running
e
to
e
tests,
so
that,
if
one
flakes
it
doesn't
kill
all
of
them
and
second
introducing
some
per
test.
Retry
logic,
long
story
short:
you
should
see
fewer
failures.
B: I bumped it out to 60 minutes instead of the current 45-minute limit for e2e tests, since retries could push the time further. Obviously we're kind of kicking the can down the road on these flakes, but I just don't have the time right now to properly fix any tests — so kicking the can it is.
B: Those tests should just be fixed, since they're relatively unflaky.
A
Yeah,
so
unit
tests
are
not
expected
to
be
flaky
and
if
they
are
I,
think
it's
we're
better
served
with
just
commenting
the
the
test
out
and
removing
it
from
the
from
the
from
our
suite
of
tests
instead
of
retrying
and
kind
of
hiding
it
yeah.
In
case
we
have
a
unit
test
that
is
failing
for
a
real
issue
that
is
going
to
be
retrying.
Forever
Until
it
reaches
a
timeout.
Is
that
correct.
B
The
test
should
retry
up
to
five
times
it's
configurable,
so
what
I
can
do
is
for
unit
tests
I'll,
just
configure
it
to
zero
retries.
Okay,.
A
Yeah
I
agree:
that's
a
good
idea,
thanks
Michael
yeah,
and
it
was
really
great
to
bring
it
up
because
it
changes
a
little
bit
how
UT
tests
are
going
to
be
execute
executing
in
the
build.
It's
good
that
everybody
is
aware.
A: I guess that wraps it up for today. I'll see you next week — thanks, everybody. See ya.