From YouTube: Argo Contributors Office Hours Dec 23rd 2021
A
All right, so yeah, there are not many of us, so it might be a short meeting. But let's start. I'm going to start sharing my screen. We have got a couple of items on the agenda today, and I guess we want to start from...
C
The fact that there are no maintainers, and any help that is provided is really best effort and donated time from the contributors. So I want to level set people's expectations on that.
A
I agree. Basically, instead of trying to support it, next week we can just say: there's no official maintainer, yeah, or moderator. Okay, I actually like it.
C
Yeah, now, every other project, like if you're in the Slacks for the other ones, you're like, yeah, there won't be any meetings, there won't be updates; anything is best effort from the contributors. So let's send that announcement, so that people aren't expecting help on issues and stuff.
A
I will do it after the meeting. Okay, and next I see we have Regina, and I think she proposed to talk about mitigating the breaking change in the upcoming release that introduces RBAC for log viewing. Do you want to talk about it?
A
Okay, let me know if you want to share your screen, or if not, I can just open the issue and we can discuss it.
D
Maybe, yeah, I think we'll start by just discussing, and then, if you'd like, I can share the code. All right, okay. So basically, maybe first to outline it: this issue is introducing a breaking change, because all resources are denied by default unless a policy that explicitly allows them is present.
D
And so what is visible today will stop being visible unless there are explicit policies defined by the users once this change is out. And so I understood that it was offered to have a switch for the intermediate release.
D
That is, basically, you can just keep this RBAC off and preserve the previous behavior, and then those users who need it, like me, will flip this switch and then configure explicit allow policies and deny policies on logs. And so I have started developing this change, and I was quite successful with implementing it, and also with adding tests and making sure those tests passed, with some help from Jann Fischer, who answered some of my questions regarding that. But there is currently a challenge around the switch; I'll get to it in just a second.
D
So, as for the switch: for me, the intuitive behavior would be that the enforcer itself would be aware of the switch, and then, if the switch is on, the enforcer would return allow in any case. But the actual enforcer is very critical code; it's the core, or one of the core components.
D
So, on the one hand, it's a bit scary to touch, especially for me, since it's my first contribution. On the other hand, this switch is something temporary: its life cycle would be just one release, and then it would die, because in the next release logs RBAC will be a first-class citizen from RBAC's point of view. And so I have tried to work around this and implement the awareness of the switch elsewhere, for example.
D
In CanI, so that the enforcer itself is not aware of the switch, but CanI is aware. So if the switch is on and I'm asking "can I view logs" or something like that, it would say yes, you can. And then perhaps be aware of the switch at the level of the application API, on the server side: if the switch is on, then don't enforce the RBAC.
D
That might be a workaround. But then, when I got to the testing part, the tests look counterintuitive, because the tests operate on the enforcer level.
C
So actually, why couldn't we do this at the API level instead of the enforcer? Because really the API server is the only thing that needs to ask the question: should this incoming request to view logs be allowed to happen or not? It seems like adding it to the enforcer might be, as you mentioned, as we're discovering, actually quite complicated. But I think this is probably one of the few things that we can actually just decide at the API level, because things like the application controller won't need to consult this; actually, in general, I don't think it needs to consult this anyway. But I feel it might be a simpler approach to just have a special check in the logs API for this, and that way you don't even have to touch the enforcer.
D
I mean, I can be okay with that. It's just, you know, if I imagine the tests: the tests are currently performing Enforce, and so for this particular release the tests would look awkward, because I cannot test this by enforcing; the enforcer would return the wrong answer, because it's not aware of the switch.
D
So I would kind of have to write weird tests that perform either CanI or something like that, that are not actually performing Enforce, you know. So yeah.
A
I know recently we've got the ability to write e2e tests that cover permissions, and basically you can change the e2e test script to turn the switch into the on state, and this way you can test it end to end, and the tests would not look awkward. I think Pasha was the first one who added a few tests that actually verify permissions, so it's possible to create a user who doesn't have a permission, and we have the ability to set whatever environment variables we want during e2e test creation.
C
Before we go to that, I want to ask Alex and everyone here about the approach of an API-server-level transition. Because at the end of the day this is a transition thing: everything you're doing, the temporary thing, is going to go away within one release. And so my suggestion is not to go knee-deep into the enforcer code, because that will have to be undone in another release. So the least amount of things we can do to transition, the better.
C
So my question to the room is: is the API server approach acceptable?
D
That answers it; I didn't want it to be, I forget the word, like, inconsistent. Yeah.
D
Okay, so then I will write some dedicated tests for that, that are going to be based on the switch, and maybe they would perform more of CanI rather than Enforce, or something like that.
C
Right, yeah. I think I interrupted Pasha. Did you have something to add?
B
Yeah, I just want to comment, yeah. I pushed the ability to test it with e2e testing, so I think it will be very easy for you to test it now. And yeah, I suggest that; it will give better coverage in comparison. We can do both. That's all, just my addition.
D
All right, great, thanks a lot. Maybe a couple of small things to add. I was not sure, from the point of view of onboarding of new contributors: I was partially asking questions on the channel and I did not receive many answers; a few, I did, but then at some point I got a direct talk with Jann, and so I was asking him questions directly.
D
So I was wondering, what is the approach to onboarding of new contributors in general?
A
I guess the process is that someone is supposed to nominate you; one of the maintainers is supposed to propose you to be a member, and typically that's it. We just have a short meeting, maybe once every month, where we add people. I don't think we're doing the best job; basically, everyone is busy and sometimes we forget. But I guess, because you mentioned it and because you're working on an important feature, it's a good enough reason to add you.
C
This basically describes the process.
C
Yeah, so the member role has a relatively low bar, because we want to encourage contributors, and then the more privileges you have, the more of a process there is, to make sure that we have the right trust models in place.
A
Let's move on. So, the next topic I proposed: I just wanted to basically share what we were doing with performance testing and why, and maybe I was hoping to get some feedback. So the problem we're trying to solve right now is that we basically use Intuit as a way to test performance, like last-minute performance changes, which is not good, because we have a dependency on Intuit, and plus...
A
Basically, only I can do it right now, and Pasha was working on trying to solve that problem. And so we've got the following idea: we were thinking that we can build a set of tools that can generate test data, and those tools can live in the Argo CD repo. Basically, the data is Argo CD settings: applications, projects, repositories and clusters. So the next decision was, basically, we want to use, I guess, vclusters to simulate lots of clusters, so basically we can have an Argo CD...
A
We can have a Kubernetes cluster that runs Argo CD and has a bunch of test data for that Argo CD, and in the same cluster, in namespaces, we can run a lot of vclusters. vcluster, if you've never heard of it, is an open source project which is just k3s plus some additional stuff; I guess, something k3s-based running in the namespaces of this big cluster that contains everything.
A
And the idea was that we will have a set of tools that can generate data, and some automation, maybe just a Helm chart, that can create a bunch of clusters and install Argo CD itself, and then we can just have Prometheus sitting in the uber-cluster, collecting metrics, and it pages when some metrics show slowness.
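The data-generation tools mentioned here could start as something as simple as the following sketch, which emits minimal Argo CD Application manifests pointed at a simulated destination. The function name, the repo URL, and the vcluster address are made-up illustrations; a real generator would also produce projects, repositories, and cluster credentials:

```go
package main

import "fmt"

// genApplications emits n minimal Argo CD Application manifests targeting a
// given destination cluster. This is only a sketch of the kind of test-data
// generator being discussed; the manifests use the public example-apps repo
// purely as a stand-in source.
func genApplications(n int, destServer string) []string {
	apps := make([]string, 0, n)
	for i := 0; i < n; i++ {
		apps = append(apps, fmt.Sprintf(`apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: perf-app-%d
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    path: guestbook
    targetRevision: HEAD
  destination:
    server: %s
    namespace: perf-%d`, i, destServer, i))
	}
	return apps
}

func main() {
	// Point a couple of generated apps at a hypothetical vcluster destination.
	for _, app := range genApplications(2, "https://vcluster-0.perf.svc") {
		fmt.Println(app)
		fmt.Println("---")
	}
}
```

The same generator output could be applied by developers for local testing or fed into the continuously running performance environment.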
A
So basically it doesn't look like a test; we will have just a continuously running test that keeps measuring performance. Oh, and one important thing was that the Argo CD installed in the cluster should be running the latest version, and every time we make a change in master...
A
It would upgrade this Argo CD, and so, if we introduce some slowness, we should see it immediately. It's very similar to what we do at Intuit; I mean, we have an Argo CD instance that manages real clusters at Intuit, and we keep upgrading that Argo CD, and if things are becoming slow, then we see it early. That's it, yeah. And then, basically, if anyone has any suggestions...
B
You can then fix it. And so, for example, once I push something to master, I will understand immediately that something is wrong, I guess.
A
You know, during normal testing... So, for example, we introduced project-scoped resources, and the change was working great, but the performance degradation happened in a cluster that had a couple of thousands of projects. You don't get that normally in each and every environment; you don't get it in a test environment. So yeah.
C
Yeah, I think it's a great idea. So there's a proposal to have some dedicated resources that are constantly testing, exactly.
A
Yeah, it's not really a test; it's like, basically, we're saying: let's have an Argo CD that simulates all types of performance challenges. Okay, yeah. And the other thing we were talking about is: let's say we have a bunch of applications that are installed into these test clusters, and in each and every cluster we can also run a job that keeps touching annotations, you know, simulating noise in the cluster. Basically, it's a simulation of a real environment and a real Argo CD that manages a lot of clusters, yeah.
B
And further on we can even add UI performance testing, but it's less important right now, I think.
C
I want to just jot down the scaling dimensions that we are concerned about. Like, there's many, many...
C
Many clusters, many projects, and yeah.
C
The cluster activity is actually the one that I'll call cluster churn. That's another one; that's the part you just mentioned, right?
B
And maybe also RBAC rules and users? I saw at least a few bugs around this: when we have a lot of RBAC rules and users and projects, in such combinations, our enforcer worked very slowly and affected all endpoints.
C
That one could be, not necessarily environmental; we can benchmark that through, like, a unit test, some kind of smoke test that maybe simulates it.
C
Or at least the RBAC part of it, I think, can be benchmarked first; we don't have to spin up a real environment.
B
Don't you think this project has a great chance to become, like, our full performance testing tool? I think in the future we can even measure things like page load times, the UI and so on. Or do you think that should be dedicated? I would just combine it into one place somehow.
C
I think it would be useful. I'm trying to size, or at least prioritize, these. Many applications and many clusters are probably the obvious top two things that we know we constantly hit. But yeah, I think the RBAC stuff could go there, but I know that one's much easier to test through micro-benchmarks.
C
So
it
would
be
my
last
one
to
try
to
implement
for
environment
based
up
scale,
testing
regina,
said
performance.
Oh
yeah,
yeah,
that's
another
good
one!
So
large
applications,
oh
yeah,
yeah!
That's
a
good
suggestion.
B
I think it's good to start with this, yeah; we can extend it later, I don't see any problems. And another question: basically, where...?
A
...you know, speak with our managers at Intuit, Codefresh, Red Hat, to see if we can get budget for that, or try to use a...
F
A CNCF service? So I would actually be very happy to have Codefresh just run it right now. I think it's totally reasonable, and I would be very happy for us to do it and run it, at least until it becomes a problem, you know. My expectation is that this is something that we're happy to do; we actually kind of need to do it anyway for our own use, and so we're already doing some of this testing.
F
So I think it's totally reasonable for us to just take it on and run it.
A
Basically, the generators can be used by developers to just create test data, or we can use them to generate test data for performance testing. And so, basically, it's a heads-up that this is coming.