From YouTube: Argo Contributors Office Hours Dec 2nd 2021
A: All right, hello everybody, welcome to the contributors meeting today. I'm going to be your host, so let's get started with the triage. Pasha was the primary for this week. So, Pasha, if you'd like to start.
B: Yeah, sorry, I just joined — hi everyone. During this week I covered a few items, but I wanted to mention one specific item. As I understand it, people have asked for this feature before, but now it has appeared again: they want to be able to define hard refresh mode as the default for an application. So even when you just refresh the application every three minutes, you want to do it in hard mode for some specific application, and we don't support such an option today — it's always normal.
B: What you can do is add an annotation, but after each refresh this annotation disappears, so they opened an issue about that, and I created a pull request. After a discussion with Alexander, I understand that it cannot work like this, because it creates some additional risk that can cause problems in their environment.
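The annotation being discussed is, I believe, Argo CD's refresh annotation, which the application controller consumes and then removes on the next reconciliation — which is why it "disappears" after each refresh. A minimal sketch:

```yaml
# Requesting a one-off hard refresh by annotating the Application.
# The controller removes this annotation once the refresh is processed,
# so it cannot serve as a persistent per-app default.
metadata:
  annotations:
    argocd.argoproj.io/refresh: hard
```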
A: Seven seven seventy-four — is that right? Yes, I'm going to open it here.
D: I also think that — so the use case behind that, or the problem that the user has, is, I think, that they edit the Helm chart, right? So, for example, they raise the image version — the image tag or something like that — but they don't actually update the Helm chart version. So it's basically something like going for the latest tag on images.
D: Something like that — and if you do a soft refresh in Argo CD, it won't pick up the change, because the version is the same; it's in the cache. After 24 hours it will eventually rebuild the cache, but until then they will have to do a hard refresh to re-render the chart with the same version.
C: Yeah, my main concern was that if we let end users request this hard refresh, it can be a surprise to the admin. Basically, if you run Argo CD for many tenants and then one tenant says, okay, I want to do a hard refresh all the time, then everyone else suffers. At least, I feel like I can understand that this is probably what people do, and I was hoping to help them somehow — but I'm also on the fence.
C: Is that feature even supposed to exist? Are they doing something wrong? Probably — maybe they should not use moving versions; they should use an exact version. Maybe they're doing something wrong, but I don't know for sure.
E: Yeah, sorry — in general, I think since Kustomize supports referencing an external URL, people end up using that, and then the expectation is: when that changes, I should also see those changes in my cluster. And they end up doing that. But yes, you're right — unless we are able to say that this hard refresh is only going to happen within the sandbox of my own resource consumption.
E: This is risky, because if everyone in the cluster now suddenly says, hey, I want a hard refresh — that's the worst case; you're definitely going to have a problem. Even if one person says, I want a hard refresh, that can lead to resource issues too. So it's really hard to decide. Sorry.
B: But maybe we should just provide such an option at the system level, for the administrator, and just provide a warning: okay, you enable this, you take this risk if you need it. So in the end we just say: okay, it's risky, it can create problems for your cluster as a whole, be careful. We already do that for other things, like the replace operation during sync, and so on.
D: Yeah, I think what users are going to do if they don't have that option is probably set up a cron job or something like that which uses `argocd app get --hard-refresh`, right? So that would basically be their workaround, and it probably causes just as much confusion for an Argo CD admin, and it's probably even harder to track down. I'm not sure — I feel like if users...
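The workaround described — periodically running the CLI with `--hard-refresh` — could be sketched as a Kubernetes CronJob. The app name, schedule, and image below are illustrative placeholders, and real usage would also need CLI authentication:

```yaml
# Hypothetical CronJob forcing a hard refresh of one app every 10 minutes.
# App name, image, and schedule are placeholders, not from the meeting.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hard-refresh-my-app
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: argocd-cli
              image: quay.io/argoproj/argocd:latest
              command:
                - argocd
                - app
                - get
                - my-app
                - --hard-refresh
```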
C: I know there is an open pull request — actually it's not a pull request anymore, it was merged: we have a feature now to bulk-refresh applications.
E: Quick question on the hard refresh — does the hard refresh also refresh the cluster state cache? No, no, right now it's only the repo side.
E: Okay — I mean, if it's only the Git side, the impact is not super high. I would say it is high, but it's not super high; it gets higher and higher if almost every user in the system requests a hard refresh. Yeah.
E: So yeah — I mean, I think it's one of those features, and I kind of tend to agree with Pasha here that if you, as an admin, are allowing it, you're effectively taking a risk.
E: But if the admin knows what she's doing, that's probably okay, because I do see the use case as real, where somebody's referencing a remote Kustomize URL and that probably wouldn't get refreshed before the next 24 hours — and this effectively ensures that you get it refreshed on the next heartbeat.
F: One thing with this approach is that, philosophically, rollbacks are kind of meaningless now. When you're referencing something that changes underneath you, the Git commit hash that you are using to sync to or roll back to is no longer what you're actually rolling back to, because the moving target underneath has changed. That was one of the rationales for trying to convince others that this is not a good practice.
E: Agreed — you could roll back to a sync for a previous Git repository SHA, but then the underlying reference is probably totally different and unrelated. How do you even make sense of what changed or did not change? Yeah.
D
Yeah
but
there
you
can
specify
the
remote
reference
using
attack,
correct
sha,
and
you
should
all
right
right.
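Pinning the remote reference to an immutable revision, as suggested, might look like this in a kustomization.yaml — the repository URL, path, and SHA are placeholders:

```yaml
# Hypothetical kustomization.yaml: remote base pinned to a commit SHA
# instead of a moving branch, so soft refreshes and rollbacks stay
# meaningful.
resources:
  - https://github.com/example-org/example-repo//manifests/base?ref=3f2a1bc9
```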
F: Right — but then they wouldn't have this problem if they were doing it that way, right? And to triage this a little: it's kind of a similar problem — you have an umbrella chart, and then maybe... actually, is that what they are doing here? We don't know the exact thing they're doing with the Helm chart, but this is the scenario: it moves, it changes underneath them.
F: So we have a 24-hour limit across the board.
F: No, no — I mean, just to solve this use case: basically, the reason it lasts for 24 hours is because we cache it for 24 hours, right? If we cache it for shorter, then you effectively get a hard refresh every — you know, it could be 10 minutes. But you might be hesitant to do that across the board, and then at the application level you could do something like a 10-minute expiry.

F: I mean, it's kind of a similar result, but achieved a different way.
B: But my question: as far as I understand, we already got a few requests around this feature before. Are these issues opened frequently — like, from time to time — or just once a year? What is the frequency of such requests?
C: Yeah — you know, basically the hesitation to add an additional setting is, at least in my opinion, that we have so many settings already, and every new setting just adds up, even if it's small. So I was trying to find some compromise: what do you think if we introduce an administrator-level setting so that admins can specify cache duration per repository? Maybe they can use it for other use cases too.
C
For
example,
they
might
say
this
weapon,
never
changes,
or
you
know
I
feel
like
very
confident
about
about
the
cheaper
and
you
can
cash
for
longer
and
someone
else
can
say
for
this
particular
repo
cash
for
for
the
shorter
amount
of
time.
So,
basically,
if
we
can
find
something
which
is
not
specifically
for
this
use
case
but
useful
for
it
as
well,
we
can
be
more.
F: Oh yeah — one of the technical hesitations we had with this was that the PR that originally came in about this — I think it was eight months ago — wanted a hard refresh on every reconciliation, right? I think this is a similar request, if I'm not mistaken, and that was one technical thing we didn't like: we expect to be able to reconcile early and often, with no consequence to an application.
F
So
a
shorter
expiry
is
almost
like
a
compromise,
because
if
they,
if
they
change
it
to
10
minutes
or
five
minutes,
even
before
the
the
cache
expires,
at
least,
if
we're
doing
a
burst
of
reconciliations
on
the
same
applications
they
will
allow
get
through
during
that
five-minute
window
period
and
reconcile
early
and
often
worse
versus
the
original.
C: That was the hesitation — I agree with you. We just knew that if you do a hard refresh on every reconciliation, it will just not work; it can only work if you have like one application, or up to 10, and even then it would be super slow. But if we just expire the cache early, I think it would be good — you know, it will work.
F: So that one I would be more amenable to, rather than a setting to always hard refresh, because it places the burden less on the controller and more on the repo server... I mean — well, sorry, I made a mistake: it spreads the load across time. I guess it reduces the load on the application controller, for sure.
C: Could you repeat what the proposal is? The ability to control the duration — you know, how long the cache is going to be preserved — per repository.
E: Oh okay, yeah — and where would that configuration be? By the admin, right, I'd expect?
C: There will be questions like: how come it's not part of the application configuration? So what I was thinking is something very lightweight — you know, maybe one additional environment variable which has a repo-specific setting. This way we're basically not advertising it as a first-class feature; it's for edge cases, for users who, for whatever reason, have this Helm chart that gets updated without changing its version — but it's a good workaround for them. So it feels like a compromise.
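For context, the repo server already has a global knob in this area: a cache-expiration flag (`--repo-cache-expiration`, default 24h); the per-repository variant discussed here is a proposal, not an existing setting. A sketch of shortening the global value — flag name and availability should be verified against your Argo CD version:

```yaml
# Hypothetical patch fragment for the argocd-repo-server Deployment.
# Shortens the manifest cache so changes hiding behind an unchanged
# revision are re-rendered sooner than the 24h default.
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          args:
            - --repo-cache-expiration
            - 1h
```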
F: I suggested the application level because it felt like the use cases were to have this happen per application, but I'm okay — I don't have a strong opinion on where the setting is.
E
I
think
I
think
jesse
on
that
note,
I
would
say
I
think,
since
the
setting
is
actually
on
the
repository,
because
that's
what
we
are
caching
and
refreshing
like
in
a
situation
where
there
are
multiple
applications,
referencing
the
same
repository.
F
Okay,
so
let's
take
this
example
for
this.
I
think
they're
referencing,
a
helm
chart
right.
That
is,
can
you
scroll
up?
F
Okay,
so
I
don't
know
saying
next:
is
some
nexus
helm
chart
that
seems
to
have
a
moving
target
underneath
it?
So
no
one
else
is
going
to
no
other
repo.
I
guess
you
could.
I
guess
you
would
only
sing
about
this.
Repo
is
the
use
case
right,
so
I
guess
it
would
work
for
this.
F
Okay,
so
it's
I
mean,
I
guess
a
repo
setting
would
work
for
for
this
specific
use
case.
F
Yeah
again,
I
don't
have
a
strong
opinion
where
the
setting
is.
I
just
think
the
the
approach
of
doing
it
by
x
bring
cash
earlier
is
a
little
better
for
the
controller
than
it
is
like
having
an
app
setting.
That
always
does
hard.
So.
F: Yeah — they need to understand that the practice of moving targets underneath the referenced repo means they won't be able to roll back correctly.
A: So I guess we reached an agreement here on how to move forward on this issue. So, Pasha — anything else on the triage for this week you wanted to mention?
A
Yeah,
it
seems
nobody,
we
can
hear
you
well,
you
can
go
ahead.
G
Okay,
yeah
try
to
ignore
the
tunnel
yeah,
so
I
have
we've
gotten
a
great
pr
from
michael
to
the
application
set
repo
in
the
merge
generator,
but
what
it
looks
like
we're,
hitting
the
issue
of
coupe
ctl,
not
supporting
applications
of
resources
that
are
over
a
certain
size.
So
in
our
case
we
have
a
custom
resource
definition
which,
because
of
this
pr,
because
of
the
way
the
changes
to
the
crd
are
structured.
It's
causing
us
to
overflow
what
a
standard
coupe
ctl
apply
can
do.
G
So
this
has
implications
for
applications
that
install
specifically
the
applications
that
install
instructions
and
also
the
ergo
cd
install
instructions
say
if
you
want
to
install
application
set.
If
you
want
to
install
argo
cd,
you
do
coupe
ctl
apply
and
then
the
installable.
G
Unfortunately,
this
pr
would
break
that,
because
again,
that
crd
is
over
the
262.
I
think
that's
256
kilobytes,
that
is
supported
by
kubectl
and
so
would
require
us
to
switching
to
something
like
coupe
ctl
replace.
G
So
I
guess
my
question
is
one:
have
other
folks
hit
this
in
the
past
and
have
a
good
strategy
for
dealing
with
this
and
two?
If
not,
how
against
switching
to
something
like
coop
ctl
replace,
would
we
be.
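For reference, the limit in question is, as I understand it, the 256 KiB (262,144-byte) cap on annotation values, which client-side `kubectl apply` hits because it stores the whole object in the `kubectl.kubernetes.io/last-applied-configuration` annotation; `kubectl create`/`kubectl replace` and server-side apply (`kubectl apply --server-side`) avoid that annotation. A self-contained sketch of the size check — the manifest file here is synthetic:

```shell
set -eu

# Client-side `kubectl apply` stores the full object in the
# last-applied-configuration annotation, capped at 256 KiB.
LIMIT=262144

# Synthetic stand-in for a large generated CRD manifest.
MANIFEST=applicationset-crd.yaml
head -c 300000 /dev/zero | tr '\0' '#' > "$MANIFEST"

SIZE=$(wc -c < "$MANIFEST" | tr -d ' ')
if [ "$SIZE" -gt "$LIMIT" ]; then
  # Alternatives: `kubectl apply --server-side -f ...`, or
  # `kubectl create -f ...` with `kubectl replace -f ...` on upgrades.
  echo "too large for client-side apply"
fi
```

On real manifests, `kubectl apply --server-side` is usually the least disruptive workaround, since it keeps apply semantics without the annotation.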
C
It
yeah,
I
think
it's
because
we
kind
of
we
have
we
already
have.
You
know.
F: Oh, okay — well, Rollouts, I think, might have hit this at one point, but we did drop the descriptions to help get past it. What I was going to ask is: are descriptions of every field included as part of the CRD?
G
They
are,
I
don't
know
if
they're
included
in
what
I
don't
know
if,
when
the
metadata
is
applied,
if
they're
included
in
that,
like
I
know
there
is
some
shrink,
there
is
a
shrink
that
occurs
to
ensure
that
what
is
included
in
the
resource
annotation
is
smaller
than
the
actual
crd
itself.
I
don't
know
if
descriptions
are
removed
from
that,
but
we
we
we
haven't,
tried
removing
the
descriptions
if,
if
there's
an
easy
way
to
do
that,
I'd
I'd
be
curious.
F
Yeah,
cube
builder
has
a
a
way
to
set
description
length
to
zero.
I
think,
okay,
it's
like
a
it's
a
one
line
option
and
I
think
that's
cube
builder.
The
sierra
controller
gen
is
the
tool
that
generates
the
series.
So
it's
a
one
line
change
to
drop
descriptions
that
will
that
should
help,
because
I
think
descriptions
are
included
in
the
last
supplied
annotation.
Everything
is
it's
like
a
duplicate
of
the
whole
thing.
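The one-line option mentioned is, I believe, controller-gen's `maxDescLen` setting; zero strips field descriptions from the generated CRDs. A sketch as a build target — the paths and output directory are illustrative:

```makefile
# Hypothetical target: regenerate CRDs without field descriptions
# (maxDescLen=0) to shrink the manifest below kubectl's apply limit.
manifests:
	controller-gen crd:maxDescLen=0 paths="./..." output:crd:artifacts:config=manifests/crds
```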
C: So it's possible that we can just define an additional field type — you know, we can store the definition of every operator as a separate type and then just reference that type in the merge operator, and this way we don't have to duplicate it. Yeah.
F: It inlines it every time it's used, rather than referencing it every time it's used — which wouldn't surprise me, because even with Rollouts, we use controller-gen, but then we do a lot of post-processing of the CRD to massage it the way we need it; like, we have extra Kubernetes annotations.
F
We
have
to
drop
some
of
the
things
it
generated
because
of
I
think
some
of
the
validation
is
not
correct,
so
it
wouldn't
surprise
me
if
controller
gen
is
like
duplicating
and
being
inefficient
about
the
types
that
it
embeds
inside
the
open
api
spec
that
that
would
be
something
I
would
check
too.
F
I
don't
I
I
don't
know
I
I.
I
know
that
it's
possible
to
define
types
and
just
reference
them,
but
I
just
I
don't.
F
I
I
G
Gotcha
so
it
sounds
like
there
are
maybe
some
strategies
to
reduce
that
around
defining
an
open
api
type
and
then
referencing
that,
rather
than
doing
duplication,
is
there
a
way
to
do
that
without
needing
to
manually,
maintain
the
open
api
portion
of
the
crd
like
to
still
rely
on
the
ability
to
automatically
generate
crds
from
go
code
with
coup
builder
tanks.
F
Yeah
we
I
mean
we
could
be
totally
wrong
about
it.
Like
code
controller
tools
might
be
doing
the
right
thing,
and
then
we
it's
a
it's
a
valid
size
problem
of
your
cdr,
in
which
case
I
think
we
can
look
at
description
dropping,
but
it
should
be
quick.
You
should
be
able
to
check
really
quick
if
it's
doing
something
dumb
and
then
I
do
recommend
trying
to
use
controller
gen
to
get
you
90
of
the
way
there
and
then
post
processing
like
we
use
a
mini,
go
application
that
just
manipulates
gmo.
I
Yeah,
I
remember
the
reason
you
can't
have
recursive
types
in
crds
is
something
about
not
implementing
references,
so
it's
possible
that
even
if
the
swagger
doc,
which
alex
just
links,
can
contain
references,
it's
possible
that
that
doesn't
also
apply
to
the
open
api
spec,
that's
embedded
in
the
crd
itself,
but
I'll
check
that
just
seems
possible.
C
And
maybe
like
the
last
resort,
let's
say
nothing
works
johnson.
How
do
you
think
if
we
just
drop
the
merge
generator
spec
part?
What
that
means
is
it
will
be?
You
know
it
would
be
like
a
disadvantage
of.
It
would
be
limitation,
we
would
be
saying:
oh,
if
you
use
mirror
generator,
you
would
not
get
the
validation.
Is
it
an
option
at
all.
F: Yeah — you probably have a different problem if you drop validation, because your controller then has to cope. So yeah, this is actually a problem with this, because I think we didn't have good validation for a while: if you allow unvalidated objects to enter the system, your controller has to be able to marshal every single object correctly.
F
Unless
you
switch
to
an
informer
that
deals
with
in
unstructured
objects,
which
is
a
is
a
big
undertaking.
That's
actually
what
both
the
workflow
controller
and
the
rollout
controller
do,
because
they
didn't
have
validation
for
some
time
but
yeah.
So
the
consequence
of
allowing
malformed
objects
is
that
if
one
of
them
doesn't
marshal
correctly
to
your
data
type,
it
basically
hoses
your
entire
controller,
because
the
informer
framework
just
can't
deal
with
that
problem
like
it
can't
deal
with
one
malformed
object.
F
So
that's
why
validation
is
kind
of
important.
But
if
you
choose
not
to
use
validation,
you
have
to
use
an
informer
that
deals
with
unstructured
object
and
then
switch
during
reconciliation.
You
switch
you
convert
the
unstructured
to
a
typed
definition
and
it
only
affects
one
app
or
one
rollout
when,
when
the
bad
thing
happens,
instead
of
the
all
of
the
roles.
C: Basically, I think we're kind of stuck a little bit, because there was a bug that wasn't clear how to reproduce or fix. The problem was related to our RBAC caching that we use to improve API performance, and basically we could never reproduce it — it happened once, and it was not clear how to fix it. So the solution I'm proposing right now is that I start blaming Casbin.
C
I
I
started
suspecting
that
maybe
caspian
itself.
The
caspian
is
a
library
that
we
use
for
urban
implementation.
Maybe
the
library
itself
had
a
bug,
has
a
bug
and
I
looked
into
list
of
changes
related
to
our
buck
cash
and
king
caspian,
and
there
were
fixes
related
to
multi
trading,
and
so
what
I
just
did
I
upgraded
has
been,
and
now
I'm
hoping
the
issue
is
solved
and
we
basically
we're
still
using
you
know
into
it
as
a
way
to
validate
releases.
C: Rather than holding the release, so — it was, I mean, basically we got a false result. What happened is that a user created a project, and the project has a correct RBAC policy, but requests were failing with a permission-denied error incorrectly. So we had cached a false response — the API server was incorrectly responding "no" — and then an API server restart fixed the problem.
F: Let's see — okay, I think I might squeeze in one of the tuning items: I needed to expose the GitOps engine tuning parameters to Argo CD so they can be controlled by environment variables. I didn't get to that yet, but I'll do it this week.
A
Okay:
okay,
thanks
everybody!
Yes
we're
over
two
minutes,
so
I
moved
yen's
topic
to
next
week's
meeting
and
see
you
all
next
week,
thanks
bye.