From YouTube: Argo Contributors Office Hours May 26th 2022
B: Yeah, can you hear me okay?

A: Yes, we can, thank you.

B: Sweet. Yeah, so for myself, fairly uneventful. A pretty standard set of questions was asked in the Discussions, so I answered those, pointing folks to the relevant bugs or relevant documentation topics. As for issues, none of them stand out as worth bringing up specifically here. I jumped in on a couple, and saw some other folks on the team jump in on a bunch as well, which was great. So yeah, pretty uneventful, nothing to bring up.
A: Okay, thanks Jonathan. Ishida, anything to add?
A: Thanks, Michael. All right, okay, so first topic of the day: Regina added one of the issues, and I guess it was mainly to see if there was any update. Regina, are you on the call?
A: Doesn't seem so, but yeah, it's open. I think the issue is related to Argo CD not being able to connect to external repos while using Kustomize.
A: That is, if the external repo is private. So the ticket is open, and there was some triage already done. I'm not sure if we should bump the priority; it might have some security implications as well.

A: But I'm not 100% sure why she brought it up. Maybe it was requested by someone at Red Hat, not sure.
E: Yeah, definitely. So for those who are unaware: in Kubernetes version 1.24 there's been a switch. In previous versions of Kubernetes, when you create a service account, a secret of the service-account-token type is created, and that is then populated by Kubernetes with a bearer token, which can then be used for authentication purposes. But in 1.24 this is no longer going to happen automatically.
E: This was done as a means of encouraging frequent rotation of the bearer token, because these tokens can be long-lived. Obviously this is very convenient from a development standpoint, because you can consistently reference the secret and get that bearer token, but we don't want to encourage leaving these tokens unrotated.
E: So an issue came in, I believe last week, where someone was trying to add a 1.24 cluster. The gist of it is that there's a timeout happening, because the cluster authentication package that Argo CD has is looking for this service account secret with the bearer token, and it's just no longer there. The workaround is that people can manually add this secret.
E: That would probably require the most minimal amount of code changes, but Kubernetes is recommending that folks move towards the TokenRequest API approach, which is a way of directly requesting a token for the service account through that API.
E: So I wanted to bring this up to make sure, one, that no one is already looking into this issue, and to gather some thoughts on whether the short-term goal should be to move towards the TokenRequest API, maybe even supporting both approaches, or whether we want to preserve the existing workflow by creating this service account secret ourselves.
A: Okay, I see that you created a research document. Is this document linked in the issue or somewhere?
G: Daniel, I also looked into it. I have one question: do you think this TokenRequest API would get implemented in client-go in a future version, so that we might benefit by just upgrading the version of it that we use in Argo CD?
E: It's actually already available. The TokenRequest API, I think, is a feature that was added a very long time ago in Kubernetes. It's just not very commonly used, obviously because of the automatic creation of these secrets, but it should be ready to go based on the current version.
A: All right. So if I'm understanding well, Daniel, we need to decide which direction to go: whether we keep the current approach, or go with the Kubernetes suggestion of using short-lived tokens for cluster communication.
E: It looks pretty straightforward, to be honest. The TokenRequest object essentially just has a configurable expiration time on it, and you can detail other things, like binding it to a particular object and specifying particular audiences that the token is relevant to. But I think for our purposes it would really only require us to specify the expiration time.
E: I think it would only really affect the get-cluster-token function that's part of the cluster auth package, and also the function that's responsible for rotating that auth.
E: Yeah, and that's mainly also why I brought it up here. I wanted to make sure I'm not overlooking any other places, but it seems this is not heavily used throughout the code base. Cool.
A: Yeah, that's a good point. I see that in the ticket; I guess that's what you were referring to as a manual process, Daniel. I see this user provided a step-by-step of how to manually configure it. So it seems Kubernetes still allows us to define an annotation in the secret, and then the token gets automatically created; then we have to manually link that secret to the service account, from what I read here.
E: Yes, the token controller will still populate these secrets as was done before, but there's just a manual step now.
D: Yeah, that seems worth fixing quickly.
A: Yeah, I agree. So let's maybe discuss this offline and decide who's picking it up, go for the fastest fix, and make sure we cherry-pick this for the coming release. It's probably also a good idea to update the documentation for previous Argo CD versions running on newer Kubernetes versions, explaining how to proceed with the manual step, unless we want to patch-release previous Argo CD versions as well.
H: Oh, sure, when creating... that makes sense. That's fair, yep, okay.
E: So yeah, this is just for the adding and the RBAC on external stuff. Okay.
I: Yeah, okay, so I basically wanted to get feedback on this. We wanted to get a metric emitted when a rollout is aborted because of an analysis failure.
I: All right. So basically, this is the PR that I worked on earlier. Hari and I were discussing that it would be good to have a metric emitted when a rollout analysis fails and the rollout reverts back to the stable version. In order to do that, I've updated the code in sync.go, where there is an is-aborted section that gets evaluated, in the calculate-rollout-conditions function.
I: Here we evaluate the conditions of the rollout, why it is aborted, and we update the message; this is what gets reflected back in the status object. So if I go back to the status object here, the status message that we put here is coming from the is-aborted block of this function.
I: This is a piece of code that gets executed and validates various conditions. In this one we are creating a metric for a rollout aborted with an analysis failure, and the metric gets emitted as part of this event when the event is generated. So we've created this new status condition, and as we pass this condition, it gets updated in the event field, like this, and a metric for it is sent.
I: The messages that we are emitting essentially come from this code block, where we had this message that we are updating with. So I feel this code area seems to be the right place, but I just want...
I: I think I missed that part. Yeah, sure. So basically, what we want is to emit a metric when the rollout fails an analysis, aborts, and reverts back to the stable version, right? So this is a new metric that we are now emitting. This metric is emitted from the calculate-rollout-conditions function, where there is an is-aborted block; in this block we are evaluating and updating the status object with the corresponding messages.
I: ...on why it is aborted, and so forth. Now, at that point, when the rollout is aborted with an analysis failure, we check for that condition; if it failed because of the analysis failure, we generate an event for it, and as part of this event a metric is also emitted. So that's what this PR essentially does. On what was raised initially: there is a suggestion from Jesse that we should probably do it by checking the rollout status instead of directly updating in the calculate-rollout-conditions block.
I: But my point here is that the rollout status is updated from this block itself, so doing it here is sufficient. We don't have to go back and verify the status object again; the reconciliation happens multiple times, and the metric may not be accurate that way. So I'm thinking this approach seems right to me; I just want someone else to review this and let me know.
I: Yeah, so whenever we generate a recorder event, a metric for that particular condition is generated.
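The recorder behavior described here, where emitting an event also bumps a metric for that event's reason, can be sketched with a toy recorder. This is not the actual Argo Rollouts code; all names below are invented for illustration:

```go
package main

import "fmt"

// Toy event recorder that increments a counter per event reason,
// mimicking the described behavior where recording an event (e.g. a
// rollout aborted due to an analysis failure) also emits a metric.
type Recorder struct {
	counts map[string]int // metric: number of events seen, keyed by reason
}

func NewRecorder() *Recorder {
	return &Recorder{counts: map[string]int{}}
}

// Eventf records an event and, as a side effect, increments the
// corresponding counter, so the metric stays in sync with the events.
func (r *Recorder) Eventf(reason, msgFmt string, args ...interface{}) {
	fmt.Printf("[%s] %s\n", reason, fmt.Sprintf(msgFmt, args...))
	r.counts[reason]++
}

func main() {
	rec := NewRecorder()
	rec.Eventf("RolloutAborted", "rollout %q aborted: analysis run failed", "demo")
	fmt.Println("RolloutAborted count:", rec.counts["RolloutAborted"])
}
```

In the real controller the counter would be a Prometheus metric rather than a plain map, but the coupling is the same: incrementing happens inside the recorder, so every recorded event is counted exactly once, regardless of how many reconciliation passes run.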
A: As part of the recorder, I would be okay with having this additional metric as well.
I: Yeah, I just wanted to bring this up. Maybe you can review it offline and let me know your feedback, and we can reopen the PR if you think this is good.
I: Okay. If this approach looks good, I'll add the unit tests and update.

A: Okay, we'll take a look.
C: Sorry, I had a topic at the beginning; I joined a little bit late. So the first issue right there, 6440: someone had opened up a ticket in our OpenShift Bugzilla, and it was about a Kustomization that retrieves a base from another repo throwing a 509 error when deployed. It linked to this issue, and it is a proposal. So I just wanted to know if there was any work being done on this, or any blockers, or if it's planned for any release.
A: Maybe it's good to touch base with the security group; there's a security channel. We should double-check whether or not we want to go ahead with this approach, because the Git credentials are provided by the repo that is configured for that application.
C: Okay, I think Jann is also part of that security group, so I'll reach out to him and talk with him about that, and see if it can be discussed in that meeting.
D: Regina, do y'all have someone who can push the implementation after that security conversation? Because I don't know if anyone...
A: All right, thank you. All right, so, any last-minute topic someone wants to bring up in this meeting? We still have a few minutes.