From YouTube: Argo Contributor Experience Office Hour 29th Jul 2021
A
Good morning, good evening, everyone. I'm going to start sharing today's meeting agenda. We had a couple of items: one is just from me, and one is a carry-over from the previous meeting.
A
The ask is to find what you are supposed to test and let us know if you cannot do it, and we will reassign that work to someone else. I know that maybe Xiaomi won't be able to do it, and I think Jan and I are going to distribute that work between us. So that's pretty much it. There was also the release blog and, sorry,
A
I didn't have time to sync up with everyone about the release blog, because we were just really busy last week. Basically, the release blog was published and it highlights the most important features. I believe it's never too late to update it and add something if I missed it. So basically, let me know if you want to add more features into the release blog to get the attention of early adopters of the 2.1 release.
A
Thank you for volunteering to get the blog done and shipped, Alex. Yeah, basically, I believe next time we should work on the blog together, but since we decided to publish the release at a predictable time, I didn't want to delay it by another week. I guess two days ago was the deadline to create release candidates, so I really wanted to get it done before that.
A
That's pretty much the update about the release. We have a few more checkboxes here which we're supposed to check before the release is closed, and it's just due diligence work, right? That's it. If there are no more questions about the release, we can move to cluster caching. Any questions or comments?
A
Yes, maybe I should explain more. So basically, what Jan did: I think he wrote a script that discovered all changes that were pushed into the 2.1 release, and that script added the needs-verification label. Every such change, every PR which has that label, also has an assignee, and basically, if you are assigned to the pull request, you can open it,
A
you know, read the description one more time, verify it, and once it's verified, you can just remove the label and that's it. I guess eventually we can simply query all the issues that have this label, and we're supposed to have no issues left. If everything on that list is checked off, that means we're done. And, just for convenience, I sent links per person.
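A minimal sketch of the label query described above, in Go, against the public GitHub search API. The repository name, label name, and query string are illustrative, not necessarily the exact ones used by the Argo project.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // searchResult captures only the field we need from the GitHub search response.
    type searchResult struct {
        TotalCount int `json:"total_count"`
    }

    func main() {
        // Hypothetical query: all PRs in argoproj/argo-cd still carrying the
        // needs-verification label. When the count reaches zero, the release
        // verification checklist is considered done.
        q := url.QueryEscape(`repo:argoproj/argo-cd is:pr label:needs-verification`)
        resp, err := http.Get("https://api.github.com/search/issues?q=" + q)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var res searchResult
        if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
            panic(err)
        }
        fmt.Printf("PRs still needing verification: %d\n", res.TotalCount)
    }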
A
And I changed it. I guess we agreed to have a "verified" label, and this is basically the inverse label. Exactly, and it's easier, because we're never supposed to have two releases in progress at the same time. So it's safe to have a needs-verification label and it's safe to assume it refers to the current release: if there are no issues with that label, that means the current release is fully verified. It's still manual, though. Basically, there are two PRs in progress that I still haven't had time to complete.
A
We want to automate it. For this release we just completed it by hand, kind of: I had to go through all the PRs and add the label where it was missing. I guess there is automation in progress, basically a bot that is supposed to add the needs-verification label and assign the pull request to someone automatically, so hopefully it will be done soon.
B
Yeah, and the other thing I would say is, I'm not sure if non-committers have the ability to add and remove labels.
A
Yes, I think they cannot. Basically... oh, actually I think non-committers can remove that label. Or maybe they cannot, okay. If not, then we'll need to think about it.
A
I'm trying to figure out who... Maybe Schrodinger doesn't have permission to remove this label, I don't know. Yeah, I just checked: I don't have permissions to remove the label in this case.
D
Okay, so before we move on to caching: are there any blockers that people are currently facing, like pull requests that need to progress or things we need to discuss? In the spirit of, you know, making progress on decision making and stuff, right?
A
I'm aware of two blockers. One is: we have a pull request that basically introduces UI testing, and I just didn't have time to merge it. I tried to run it locally and realized I had to make some changes to get it working locally, and so I realized it would be really nice to merge it along with a CI job and a Dockerfile
A
that can at least run the e2e tests for the UI. I didn't have time to work on it, and I guess the blocker in this case is me, or we just need a person to unblock that pull request. I feel like we need a person who can work on the Dockerfile and, you know, dockerize the e2e tests for the UI. That's one. And let me check — I'm pretty sure we spoke about this pull request in the previous meeting, so it should be in the agenda.
A
So I'm pretty sure. Maybe it can be merged today, but the problem with that pull request is that we need to document it. Basically, I reviewed it one more time and had no additional comments, so maybe we can make a decision now and agree to merge it, if someone is going to pick up the next steps and, I guess, prepare a Docker environment that can be used to launch the tests.
A
So the problem is local dependencies. Basically, I could not start it locally, simply because the version I had locally was not matching the version used in the test code, plus a couple of other dependencies, and that's what stopped me from using it. I didn't want to have a test that is only that good unless we plan to continue working on it.
C
Right, so a quick question: I think the author is Keith. Hey Keith, do you think this is something you could pick up as a next step, assuming...
C
Yeah, I could go. I could provide some context about why I added this topic. No special reason as such, but I was going through the code and of course I found out that we do cache the cluster state at startup, and because of that there is a spike in memory initially, which would flatten out eventually.
C
So I think the general question that came to my mind is: controller-runtime provides us a cache already in general, but we don't want to hit that again and again, and that's why we create our own cache, which sounds great. But how do we ensure that cache remains up to date? Is it based on events that we listen to across the cluster, and that ensures the cache is updated? Because it's a pretty heavy cache to maintain, and with caches the first problem that we would all have is:
C
when do we choose to invalidate it? When do we choose to update it? So I'm not looking for a code deep dive, but more of a conceptual deep dive, so that if I had to recommend something to somebody about sizing, I would have a good understanding of what the internal workings of the cluster state caching are.
A
So I guess I wanted to first explain why we had to go with a cache in the first place. Basically, to support resource pruning, one way or another you need to know about all resources in a cluster. We used to not have a cache, and we would just execute a bunch of list requests filtered by label.
A
I checked how Flux did it, and I tried to run it on a cluster, basically a mid-sized cluster, and the result was that just doing those list queries would take maybe 10-15 minutes for a mid-sized cluster. That was pretty much one approach to comparing the two states, you know, the state in git and the state in the cluster. Another approach is to do the list queries once at the start of, basically, Argo CD, and then use watch requests to maintain, just, you know, the in-memory state of the managed cluster. Right, so...
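A minimal sketch of the list-then-watch pattern being described, using the standard Kubernetes dynamic client. The resource type and the shape of the in-memory state are illustrative, not Argo CD's actual implementation.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Illustrative: deployments only; the real cache does this for every
        // discovered resource type in the managed cluster.
        gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
        ctx := context.Background()

        // 1. List once at startup to build the initial in-memory state.
        list, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        state := map[string]string{} // namespace/name -> resourceVersion
        for _, item := range list.Items {
            state[item.GetNamespace()+"/"+item.GetName()] = item.GetResourceVersion()
        }

        // 2. Watch from the list's resourceVersion to keep that state up to date.
        w, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).Watch(ctx, metav1.ListOptions{
            ResourceVersion: list.GetResourceVersion(),
        })
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Printf("event: %s\n", ev.Type) // ADDED / MODIFIED / DELETED -> update `state`
        }
    }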
C
Right, so a quick question on that, on what you just mentioned: do we do a list first and then effectively a get on each of those?
A
No, no. Okay, so basically we do a list and we do not store the... okay, so first: for every resource type in a managed cluster, Argo CD executes a list request and only stores the metadata of every resource in a local in-memory cache. Basically, we store kind, name, namespace and, I think, uid and some additional metadata. Plus, Argo CD knows that if a resource has the label that Argo CD creates, that means the resource is managed by Argo CD, and for such resources it will also keep the full resource.
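A rough sketch of the kind of cache entry being described: lightweight metadata for every object, and the full object only when it carries Argo CD's tracking label. The struct and the label handling are illustrative, not the actual gitops-engine types; app.kubernetes.io/instance is Argo CD's default tracking label, which is configurable in practice.

    import (
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/types"
    )

    // resourceEntry is an illustrative cache entry: identifiers for every object,
    // plus the full manifest only when Argo CD manages the object.
    type resourceEntry struct {
        Kind      string
        Name      string
        Namespace string
        UID       types.UID
        // Full is nil for unmanaged resources; keeping it nil for the vast
        // majority of objects is what keeps the cache affordable.
        Full *unstructured.Unstructured
    }

    // trackingLabel is Argo CD's default tracking label.
    const trackingLabel = "app.kubernetes.io/instance"

    func newEntry(obj *unstructured.Unstructured) resourceEntry {
        e := resourceEntry{
            Kind:      obj.GetKind(),
            Name:      obj.GetName(),
            Namespace: obj.GetNamespace(),
            UID:       obj.GetUID(),
        }
        if _, managed := obj.GetLabels()[trackingLabel]; managed {
            e.Full = obj.DeepCopy() // managed by Argo CD: keep the whole object
        }
        return e
    }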
C
A quick question, just to clarify the initial part you mentioned. So it does a list on all types, and it stores metadata for almost all of them, except the ones which are managed by Argo CD, and for those it maintains a full resource definition. (Exactly, yes.) Okay, and it watches — but does it watch only those objects or types which Argo CD cares about, or does it watch all types that are available on the cluster?
A
All types, because you never know which type is supposed to be eventually deleted. You might have some resource that used to be in git and is no longer there, but Argo CD still has to care about that resource, because you want to notify the user that there is a leftover resource and that it's supposed to be deleted during the sync process.
A
And that's why the choices are: you either query the whole cluster once every, I don't know, 10-15 minutes, or you continuously monitor the cluster. We chose to monitor the cluster continuously, and that gave us better performance, and at the same time it basically also powers all the UI features, because we already get the information about cluster state. We have it, so we use it as an opportunity to dump the application state into Redis, and that powers the UI.
A
So we use the data for two purposes: one, to do GitOps, and second, to visualize what's happening in a cluster. And yeah, I can talk about it: you're right, it might cause performance issues. For example, during startup you would get a memory spike, and we kind of keep trying to fight with it. There are some configuration knobs that are not very well advertised in the documentation right now, but basically we have a set of environment variables.
C
One does pagination, the other does limits?
A
Exactly, yes. So basically the amount of memory you need is kind of controlled by two variables. One is how many concurrent list requests you do, and we use pagination to minimize the amount of memory, so the page size is the second variable — I guess that's the second knob. It works well for most of the cases, but there are edge cases where it doesn't. The simplest example: what if you have config maps in your cluster, and the config maps can be big?
A
So basically you can have up to a one-megabyte resource object in a config map, and in this case, if you just query 100 config maps, the response size would be 100 megabytes — or 50 megabytes if you use 50 as the page size. And then, if you try to unmarshal those 50 objects in Golang, I'm sure it will explode into something like one gigabyte of memory. So you will get a spike of one gigabyte of memory and there is no, like...
A
Basically, we don't have a good solution right now for how to fight this particular problem. If you have disproportionately big objects in your cluster, then basically the only choice is to give the Argo CD controller a bit of a memory buffer, so it will survive the spikes.
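A minimal sketch of the paginated list being described, using the typed Kubernetes client and config maps as in the example above. The page size is caller-chosen and illustrative of the trade-off discussed here.

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listConfigMapsPaged lists all config maps in pages of `pageSize` items,
    // so only one page has to be held (and unmarshalled) in memory at a time.
    func listConfigMapsPaged(ctx context.Context, client kubernetes.Interface, pageSize int64) (int, error) {
        total := 0
        opts := metav1.ListOptions{Limit: pageSize}
        for {
            page, err := client.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx, opts)
            if err != nil {
                return total, err
            }
            total += len(page.Items)
            // An empty continue token means the last page has been reached.
            if page.Continue == "" {
                return total, nil
            }
            opts.Continue = page.Continue
        }
    }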
A
Yeah, so maybe we can start talking about known issues. These known issues are not very well documented right now, but I will just go through them, and eventually maybe we will document them. So this memory spike is one, I think the most prominent one, and we also had another problem related to syncing.
A
It was causing memory spikes as well, and I'm pretty sure we hopefully solved it in the 2.1 release. We used to fork-exec the kubectl apply command, and later we switched to pretty much the same logic, but instead of fork-exec we were importing kubectl as a Golang library and executing the exact same code. And apparently kubectl apply uses a lot of memory: literally, a kubectl apply with a single resource could take maybe 50 megabytes.
A
And if you try to sync 100 resources, you need 5 gigabytes of memory, which is a lot. We just did a simple optimization that hopefully eliminates that memory spike entirely in 2.1, but it's not fully...
A
All right, and there is a third problem that is not documented at all. Basically, I tried to analyze how much memory we actually use: even if you don't worry about spikes, the controller still uses quite a lot of memory.
A
The worst case scenario I saw was a user who reported that they manage a giant cluster: it has almost 1 million objects in it, and the controller requires 40 gigabytes of memory to store that one million objects in cache, which is a lot. I was trying to analyze the memory profile of a smaller Argo CD instance that uses just 8 gigabytes of memory, and what I found is that apparently the memory heap of that instance was only one gigabyte, and seven gigabytes were somewhere else.
A
I could not even understand what it was at first, and then, based on reading up on how Golang manages memory, it seems like we simply waste memory on watch requests. Every watch request requires the creation of, I think, more than two goroutines, and each goroutine has a little bit of stack — a bunch of variables stored on the stack and some goroutine overhead itself. And the instance that I was troubleshooting,
A
I think it monitors 200 or 300 clusters with around 100 resources in each cluster, and eventually you need to create around 20,000 goroutines, and each goroutine consumes a few hundred kilobytes of memory. So there is a possibility we can save, I think, more than half of the memory that the controller consumes if we stop using goroutines —
A
if we stop using the built-in Golang client to do the watch — sorry, not built-in, I mean the official Kubernetes Golang client — to do watch requests, and instead switch to, you know, plain old HTTP: if we manually execute the HTTP request, avoid creating goroutines, and simply use callbacks to process the response. So...
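A hedged sketch of the idea being floated here: driving a watch over a plain HTTP streaming request and handing each event to a callback, rather than going through the client-go watcher and its per-watch goroutines. Authentication, TLS, and reconnect handling are omitted; the URL and the callback are illustrative.

    import (
        "encoding/json"
        "net/http"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // watchWithCallback opens a single streaming watch request and invokes
    // onEvent for every JSON event on the stream. No extra goroutines are
    // created here; the caller decides how (and whether) to parallelize.
    func watchWithCallback(client *http.Client, watchURL string, onEvent func(metav1.WatchEvent)) error {
        // Example watchURL (illustrative):
        //   https://<api-server>/api/v1/pods?watch=true&resourceVersion=<rv>
        resp, err := client.Get(watchURL)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        dec := json.NewDecoder(resp.Body)
        for {
            var ev metav1.WatchEvent // {"type": "...", "object": {...}}
            if err := dec.Decode(&ev); err != nil {
                return err // io.EOF when the server closes the stream
            }
            onEvent(ev)
        }
    }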
A
So, to summarize: the second problem, the memory spikes during syncing, is hopefully solved; we still have the memory spike during initialization and no plan yet for how to address it; and we have in general high memory usage, which I'm hoping we can attack in the 2.2 release by just reducing how much memory we use overall.
A
Yeah, so there is a function called startMissingWatches, which also watches events, and all it does is that.
A
It does a lot, but the main code that I was talking about is here. So we start the watch, yeah — here we start the watch request, and this is the code that Kubernetes provides us. Basically, Kubernetes has a Golang client which has a watcher, and we start the watch request here and then we process events from that watch request and update the cache. So in theory we can get rid of it here and simply create an HTTP request. It's not so complex, it's literally, it has no, like...
A
We just need to construct the right URL, and the request to that URL is going to send back JSON payloads that can be deserialized and worked with. Basically, the only way to verify whether the theory works is to try it: just implement that code, then try to run it on a big cluster and see if it helps. So, or if someone has...
C
Sounds good to me so far, Alex, this is helpful. I think I got more information than I was looking for and I'm really happy about it. Thank you.
D
Yeah, I think we were pretty much cornered into using this approach, because we needed the cache, but we couldn't use the informer cache, because that informer cache would just be way too big. So this is kind of somewhere in between: it acts like an informer but allows flexibility on how we cache and what we cache, Alex.
D
I did have one thought as you were going through the caching. As I recall, as we watch, we receive elements as they come in, and then we realize, okay, this pod is part of application foo.
D
Is it true that we then go and recalculate the resource tree at that point in time — we do it all in the, yes, like a callback? Okay, and while we're doing that, are we holding the global lock — I mean, I don't know if it's a global lock — or are we holding any exclusive mutex on...
A
Yes, that's true as well, we have a mutex per cluster. Basically, if Argo CD monitors two clusters, only one of them is going to be locked, but yes, we hold that lock. So every time something changes, we kind of freeze the cache of one cluster and then try to figure out whether the changed resource belongs to any application or not. If it belongs to an application, then we enqueue it — we put that application into the queue and the controller will try to re-compare the state of the application. And basically we had to introduce,
A
I think, three levels of re-comparing. One is very lightweight: we assume that nothing changed in the application and we just need to dump the resource tree into Redis. The second level is: we assume something changed, but we already have everything in cache and we just need to re-compare. And the last one is: we force-resolve the git revision to make sure we get the latest state from git. Kind of, yep.
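A rough sketch of the flow just described, with all names illustrative rather than Argo CD's actual types: a per-cluster lock is held while the event is matched to an application, and the application is then pushed onto a work queue with one of the three refresh levels mentioned above.

    import (
        "sync"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/util/workqueue"
    )

    // refreshLevel mirrors the three comparison levels described above (names are made up).
    type refreshLevel int

    const (
        refreshLight  refreshLevel = iota // nothing changed: just re-dump the resource tree to Redis
        refreshNormal                     // something changed, but cached manifests are good enough
        refreshHard                       // force-resolve the git revision before comparing
    )

    type refreshRequest struct {
        App   string
        Level refreshLevel
    }

    // clusterCache is an illustrative per-cluster cache guarded by its own mutex,
    // so an event storm on one cluster does not block the others.
    type clusterCache struct {
        mu        sync.Mutex
        resources map[string]*unstructured.Unstructured            // key: kind/namespace/name
        ownerApp  func(obj *unstructured.Unstructured) (string, bool) // maps a resource to its application, if any
    }

    func (c *clusterCache) onEvent(key string, obj *unstructured.Unstructured, ev watch.EventType, queue workqueue.Interface) {
        c.mu.Lock()
        defer c.mu.Unlock()

        // Update the in-memory state for this one cluster.
        if ev == watch.Deleted {
            delete(c.resources, key)
        } else {
            c.resources[key] = obj
        }

        // If the resource belongs to an application, enqueue a refresh instead of
        // re-comparing inline; the controller works the queue.
        if app, ok := c.ownerApp(obj); ok {
            queue.Add(refreshRequest{App: app, Level: refreshNormal})
        }
    }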
D
And so the question I have is that there's a, I think, a little...
D
Maybe it's not expensive, but you can correct me if I'm wrong: there's a little bit of expense in recalculating the resource tree at the point in time where we get the object. I'm wondering, when you have high churn — let's say I have an application with hundreds of pods in it, and the pods are going through an update and they're churning, so all the pods, basically related to this app foo, are constantly changing — so I'm wondering:
D
I think it's true that every churn in a pod causes us to recalculate the resource tree, and then the next pod changes and we recalculate the resource tree again. Should that be kind of put into a work queue instead of doing it inline, in order to...
A
I think I even had an experiment, but I guess it's a good idea. Right now we have no batching, no batch processing: we simply process one event at a time, and we already have a ticket about it. Basically, it causes high CPU usage if you have an application where something is constantly changing in the cluster.
A
We need some kind of mechanism to maybe batch requests and then process them once in a while, like maybe once every, I don't know... The tricky part is how frequently, like how long is it safe to delay the processing for.
D
But we have this work queue mechanism already for an application. If we were to just move that work into the application reconcile — mm-hmm — right, would that collapse the work needed to be done?
A
You can safely assume that if you get... I don't know, like, let's say we introduce batching and we process that batch every half a second. We can then go through all events in the batch and just collapse all modifications — get rid of the older modification events for a single resource, right? Because if you have a resource that changed 10 times, you can just discard the nine other changes and work on the most recent one.
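A minimal sketch of the batching idea being discussed, assuming events arrive on a Go channel as in the prototype mentioned later: accumulate for a fixed interval, keep only the latest event per resource, then hand the collapsed batch to a handler. The half-second interval and the key function are illustrative.

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/apimachinery/pkg/watch"
    )

    // eventKey identifies a resource so that later events replace earlier ones.
    func eventKey(ev watch.Event) string {
        gvk := ev.Object.GetObjectKind().GroupVersionKind()
        acc, err := meta.Accessor(ev.Object)
        if err != nil {
            return fmt.Sprintf("%s/unknown", gvk.Kind)
        }
        return fmt.Sprintf("%s/%s/%s", gvk.Kind, acc.GetNamespace(), acc.GetName())
    }

    // processBatched drains the events channel, collapsing duplicates per resource,
    // and invokes handle with the collapsed batch every `interval` (e.g. 500ms).
    func processBatched(events <-chan watch.Event, interval time.Duration, handle func([]watch.Event)) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()

        pending := map[string]watch.Event{}
        flush := func() {
            if len(pending) == 0 {
                return
            }
            batch := make([]watch.Event, 0, len(pending))
            for _, ev := range pending {
                batch = append(batch, ev)
            }
            handle(batch)
            pending = map[string]watch.Event{}
        }

        for {
            select {
            case ev, ok := <-events:
                if !ok {
                    flush()
                    return
                }
                // A resource that changed 10 times contributes only one entry here.
                pending[eventKey(ev)] = ev
            case <-ticker.C:
                flush()
            }
        }
    }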
A
And then the second problem is that we don't have a solution. Basically, even if you find that application, the best you can do is to use resource filters to exclude the constantly changing resources, which is not a great solution. And second, it's hard to find them, and I guess the perfect solution would be to teach Argo CD so it can protect itself from such resources.
C
A quick question on what Jesse and you were discussing just now. I think the discussion was about this: right now, whenever an object changes, based on watching the event, we also go and pull from git and try to see if on the git side the YAML has changed in a way that Argo CD should care about — that is, if the local object is associated with an application. And are we saying that, instead of doing that, we should move that comparison of git with the local version to the application reconcile queue?
D
That is actually already handled there, so that work is actually already done, right? Efficiently, you mean — it collapses and we don't repeat redundant work. There is one more thing.
D
And I think we're probably doing unnecessary work if you have a high-churn application, because you might have like 10 events for the same app, and then you're recalculating that tree 10 times, probably in the span of milliseconds, when really you could have just waited until that last event and then done it once.
C
Actually, so this is about figuring out what the associated application is and then going all the way up the resource tree, and we're saying: let's not do it right away, let's put that in the queue, because other elements in the resource tree may have also gotten events, and instead of doing it like 10 times, you might do it once and ignore the others, because it's done. Yes, yeah.
D
This is basically the same work queue pattern for controllers, yeah, but we're not following it for some things. I think there's other stuff aside from resource tree computation, but anyways, since we're on the topic of performance, it's something that I was just thinking about while you were talking, Alex.
C
One more thing you mentioned — Jesse, Alex, sorry — you mentioned something around git. So do we make any calls to git at all after receiving an event for an object on the cluster? I guess, as of 2.1...
A
We no longer make git queries: we go to Redis and we get the resolved revision from Redis, and then we get the previously calculated manifests from Redis. Assuming Redis has all of that, we don't need git, right. But if Redis happens to be empty, or the cache expired, then we would go to git and re-render, regenerate the manifests.
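A hedged sketch of the lookup order just described, using the go-redis client; the key scheme, TTL, and render function are illustrative and not Argo CD's actual cache layout.

    import (
        "context"
        "fmt"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // cachedManifests returns rendered manifests for (repo, revision, app) from Redis,
    // falling back to rendering from git only when the cache has nothing for that key.
    func cachedManifests(ctx context.Context, rdb *redis.Client, repo, revision, app string,
        renderFromGit func() (string, error)) (string, error) {

        key := fmt.Sprintf("mfst|%s|%s|%s", repo, revision, app) // illustrative key scheme

        if val, err := rdb.Get(ctx, key).Result(); err == nil {
            return val, nil // cache hit: no git access needed
        } else if err != redis.Nil {
            return "", err // a real Redis error, not just a miss
        }

        // Cache miss (empty Redis or expired entry): go back to git and re-render.
        rendered, err := renderFromGit()
        if err != nil {
            return "", err
        }
        // Store with an illustrative 24h expiry so stale entries age out.
        if err := rdb.Set(ctx, key, rendered, 24*time.Hour).Err(); err != nil {
            return "", err
        }
        return rendered, nil
    }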
D
But that process was always in the app reconcile, and it was never in the inline event processing, so it makes sense. Okay.
A
Let me find it. This is from a user who first discovered they had high CPU usage, and I'm pretty sure he basically understood what is happening and filed a ticket that describes what we're talking about right now. He is proposing to add throttling and not process an application too frequently — basically do it once per some meaningful duration that we need to agree on, I guess maybe half a second or a second.
D
Is that a possibility, or do you think we need throttling instead?
A
I mean, yeah, throttling — I didn't think about that. I thought about the implementation, but it would take... I was thinking to simply add it, kind of, for simplicity of implementation, so we don't touch too much code. We could add it basically in this file. I'm not ready to point to where exactly it should be, but I believe there is a...
A
I just had a prototype, and the prototype I did just added kind of a simple batching logic. So we have a function that processes events, and all it does is write them into a channel, and then you can add additional logic so that, instead of processing events from the channel one by one, it can simply accumulate them into a batch and then process that batch once every...
D
Oh, but instead of that channel we could have enqueued it into a work queue, a collapsing queue. So...
A
Either way — it's an implementation detail that doesn't change the general approach, yeah. Basically, this is the channel that I'm talking about: we read from this guy, and also, yes, we read from this guy, and basically, instead of reading from the channel, we can just as well...
A
Okay, I think it sounds like we're done, right? There are no more topics and we've almost run out of time.
G
If we're done — you're done — I just wanted to highlight one thing. As most of you probably know, we did some updates to our governance recently. We used to have something called the bootstrap committee that was responsible for the project governance and making sure the project moved forward. As Argo has matured, the governance has been updated to basically deprecate the bootstrap committee and move that voting to the maintainers.
G
Also, the bootstrap committee meeting we've had has been open to everyone — I know some of you have been in the bootstrap committee meeting that is on Friday mornings Pacific time — but moving forward that meeting will be renamed the maintainers meeting. The agenda will basically be the same, more focused on, you know, project maintenance and project governance. I just wanted to make sure that those of you here that are maintainers, and everyone else that wants to listen in, know about it.
A
Thanks for the update, Henrik. And yeah, it seems like we're going backwards: first we spoke about the controller, now we did updates, and I forgot that we didn't do introductions. I wanted to introduce Pasha, who joined us for the first time from Codefresh, and he has basically already started helping with Argo CD. That's awesome. Welcome, Pasha.
C
I think the list now has some of the busiest maintainers, and I think it's time that we expand the list to other folks as well. So, any volunteers to try out the GitHub Discussions moderation? A quick overview of what you need to do there: effectively, when people ask questions, have a way to have a conversation with the person and, if needed, give an answer or find out what the answer is from your peers.
C
We have a list of five people there and they're all pretty busy maintainers, and we need to ensure that other folks also pitch in. It's also a good opportunity to expand knowledge and figure out what kind of usage Argo CD receives in the field.
C
Yeah, I think, let's add the names to the rotation list. I think that's the idea: let's add the names to the rotation list and we can go from there.
A
Yeah, so I guess just feel free to add your name into the document itself — just in case, here it is, if you do not have the document — and we'll go from there. But for the next week, I think you still might ask me, right, to do it, if we're going to...
C
Yeah, sorry, a quick question: I think somebody said that he would want to volunteer. Was that you, Pasha?
C
Okay, Jesse, did you mention that you also added your name? I could just add it.
C
Thank you, yeah. One thing I'd like to suggest, since we don't have Jan here, and since, Alex, you've been doing it for a while and you're still working on the release process: John Pittman, do you want to take it on this time? You could actually work with Pasha, since you both started out recently, and you could of course talk to Jan or Alex or Jonathan if needed. Is that good with you two?
A
I realized it's not documented — I mean, that link was not there in that document. So the idea of the moderator is that the moderator watches for questions asked here in GitHub Discussions and, you know, answers them. We also get a lot of questions in issues, and I think we should do a better job of redirecting questions filed in issues into Discussions.