From YouTube: sig-auth bi-weekly meeting for 20200902
A: Hi everyone, this is the September 2nd, 2020 meeting of SIG Auth. The first item on the agenda is the multi-tenancy benchmark. Jim, are you on the call?

B: Yes, I am.

A: So, do you have something to share? Would you like to walk us through it?
B: Oh, that, let me go through it, okay, all right. So I think we had discussed this project maybe a month or two ago in one of our SIG meetings, but just as a quick recap of what we're doing in the multi-tenancy benchmarks project: this is one of the tracks in the multi-tenancy working group.
B: One of the challenges that came up, I think at a prior KubeCon, and what we've been discussing and addressing in this track, is how do you measure whether namespaces are properly configured for multi-tenancy, and then broadening that to other aspects of the cluster configuration as well. So the intent was to create a set of benchmarks where we could measure, test, and validate configuration settings for multi-tenancy, and that's what this benchmarks effort does.
B: So if you go to the multi-tenancy working group repo and then to benchmarks, there is a subfolder, kubectl-mtb, and that's the specific project I'm going to talk about. We're going to move that up to the main folder and replace some of the contents there. What we've done previously is identified about 15 or so different checks which we would go through for a namespace, and that's the focus of kubectl-mtb.
B: We worked with a Google Summer of Code intern, Anuj, who's on the call, as well as Divya, who was a Community Bridge intern, and they have mostly developed this tool, working with our working group. What I'm going to demonstrate today is what this does and how it helps with multi-tenancy measurement. So feel free to browse and look at this folder, but what I'll do is just share what this looks like.
B: We'll just install some role bindings; this puts in a role for "ali", who's an admin, and "edith", who is an edit user, in that namespace. Prior to this, if I wanted to build, I would just do "make kubectl-mtb", but I already have this built. So I can just run kubectl-mtb, and this works as a kubectl plugin, or it can be a standalone command.
B: If I do "get", for example, it will show me the list of benchmarks available. These are the 16 or so checks we have currently implemented, and they cover different things like host isolation, namespace isolation, as well as different self-service operations. But what we really want to do here is just say "run".
B: We say "run" with -n for the namespace, which I'll point at the namespace I just created, and then run as the user ali, who's the admin of the namespace. Now, since I haven't configured this namespace with any security policies or any type of policy engine, a lot of these checks are going to fail. Some may pass just because of standard RBAC. So you see the report over here.
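The session being demoed might look roughly like the following. This is a reconstruction from the discussion, not authoritative kubectl-mtb usage, and it needs a live cluster to actually run:

```shell
# Build the plugin from the wg-multitenancy benchmarks repo
make kubectl-mtb

# List the implemented benchmarks
kubectl mtb get benchmarks

# Run the benchmarks against a namespace, impersonating its admin user
kubectl mtb run benchmarks -n tenant-ns --as ali
```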
B: Most of the checks failed. As for what we have done for testing: going back up to the repo, if you go through the README, we've now implemented these checks with both Kyverno and OPA Gatekeeper. Those are the two policy engines that we have tested with. For this demo, what I'm going to do is just install the Kyverno policies, which will help us get these checks to pass.
B: So, going back to the shell, I'm just going to run this command, which installs the Kyverno policies. The reason I'm using Kyverno is that Gatekeeper takes a little bit longer to install. We want these to be installed, so let's see, and let me make sure I have Kyverno running.
B: And so now, if we do "run" with this user, what we would expect to see is more of these tests passing. It's going to go through and run the checks, and what we're actually doing in these checks is just using dry run and the Go client, so we're able to... oops, it seems like that's running into some errors.
B: Yeah, I'll have to check and see why it's running into errors, but the expected result would have been that, once you install the policies and have the latest benchmarks, it would come back and show these as successful. That's the idea: report back and show success for these particular users.
B: Oh yeah, Anuj is messaging and saying there was a role binding error, so maybe I configured that wrong. Thanks, Anuj. Let's try that again. Did I... or maybe I didn't. Oh right, I didn't put that in the namespace.
B: So yeah, we see more passes, and there are a few expected failures, because there are some tests which we're not a hundred percent sure how to handle exactly; we're working with some different frameworks to figure that out. But as you see here, most of the tests pass. For the ones which show an error, there are, for example, some resources, like a default network policy within the namespace, where we're still debating.
B: The thinking there is to have some standard label selectors we could introduce, or some labels on that resource, and then we would check for that. The other one: since I just created this namespace, I don't have resource quotas configured. If I configure that correctly, these two checks will pass as well.
B: So that's the general idea. This also produces, if I change the output to a policy report... one of the other working groups, the policy working group, is working on a standard policy report format. This tool, mtb, has the capability of generating a policy report, which becomes available in your cluster, which you can then collect as a Kubernetes resource and report on externally as well.
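Assuming the tool emits the wg-policy PolicyReport custom resources mentioned here, collecting them might look something like this (the resource name is an assumption based on the wg-policy format, not confirmed in the meeting):

```shell
# Inspect the policy reports generated in the target namespace
kubectl get policyreports -n tenant-ns

# Export one as YAML for an external reporting pipeline
kubectl get policyreports -n tenant-ns -o yaml
```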
B: So that's what we wanted to show, and really what we're looking for here is thoughts on next steps, and whether anyone from the SIG wants to help in terms of firming up, auditing, and validating that these are the right checks, and also helping extend them. So far we've been focused mostly on level one checks, where we're looking at single namespaces. We want to extend this to multiple namespaces that a tenant, or a particular team within an enterprise, may own, and how do we validate that?
A: How did you define the benchmarks?
B: Yes, so it is very configurable, and I'll show you quickly. Let me see if I have this up... yeah, so the benchmarks themselves: if we go here to the test benchmarks, we can take a look at what one of them looks like. What Anuj and Divya have done is create a way to standardize this, so you can define a YAML for the benchmark, and this will automatically generate the README and also some scaffolding for the test cases.
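A benchmark definition along those lines might look something like the sketch below. The field names are illustrative guesses at the shape being described, not the tool's actual schema:

```yaml
# Hypothetical kubectl-mtb benchmark definition (field names illustrative)
id: MTB-PL1-BC-CPI-1
title: Block access to cluster-scoped resources
benchmarkType: Behavioral
category: Control Plane Isolation
description: Tenants should not be able to view or modify cluster-scoped resources.
remediation: Restrict tenant roles to namespaced resources via RBAC.
profileLevel: 1
```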
A: Yeah, it seems really cool, I think.
A: I would like to review what the benchmarks are so far, and maybe people can crowdsource additional benchmarks.
A: Yeah, because we've seen some stuff in the CIS benchmark that was kind of questionable, and there's probably stuff missing if you look hard enough. The other thing is, the kubectl plugin, I think, is pretty cool, but some sort of continuous scanning would...
B: Yes, yeah. What we're thinking there is we could run the same CLI as a Job or as a CronJob, and it could produce these policy CRs. One challenge, then, is if there's some cleanup and other things required afterwards. So far we've been able to write most of the benchmarks so that there's no cleanup required, but if some cleanup is required, we might need a controller, or to run these through a controller.
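Running the CLI on a schedule, as suggested here, could be sketched with a standard CronJob. The image, service account, and arguments are hypothetical placeholders, since no packaging was specified in the meeting:

```yaml
# Illustrative only: periodically run the benchmark CLI inside the cluster.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mtb-scan
spec:
  schedule: "0 * * * *"   # hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mtb-scanner   # placeholder
          restartPolicy: OnFailure
          containers:
          - name: kubectl-mtb
            image: example.com/kubectl-mtb:latest   # placeholder
            args: ["run", "benchmarks", "-n", "tenant-ns", "--as", "tenant-admin"]
```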
A: Yeah, it is slightly unfortunate that, in order to scan all namespaces, you would need to know specific users that have access in each namespace, and you would also need to have basically full cluster-admin access to the cluster. But I don't see an obvious way around that.
A: Yeah, assuming RBAC is being used. The other thing is, these things like blocking privileged containers do require a dry run, because you really...
B: Yeah, so we wanted to keep this as agnostic as possible, both of the enforcement and of the way namespaces are configured, but that does introduce some challenges where there's a little bit of configuration required. So maybe, if a cluster admin gives us some hints, or indicates what their expectations are for multi-tenancy, then we could even take that as an input and run the scans accordingly.
D: I do really like, though, that black-box-assessment kind of approach, where it's not "we only support, say, Gatekeeper and we inspect your policies", but rather "we try doing things someone could try, and we report on the success or failure of it". Everyone has different reasons to choose different tools; there are so many ways to configure this.
C: Yeah, when Mike was asking about which checks we were doing, the one that came to mind was the stuff Tim has been working on defining around the different pod security profiles. If you wanted to say "this namespace is constrained to this profile", then you can check those things. This seems like a friendly way to do those checks, and it is behavioral instead of depending on a particular mechanism.
D: Is there such a thing as a meta-test in here? If you had individual tests for each of the properties, say in the pod security standards, could you also have a meta-test that said, based on these n checks, pods in this namespace are constrained to the restricted policy from the upstream pod security standards, versus this namespace has the baseline one, versus this namespace is unconstrained?
B: Not currently. The way we have structured the tests is with different profile levels. The thinking was that the base profile level, level one, is just standalone namespaces. Profile level two introduces more self-service, so things like HNC, the hierarchical namespace controller, which allows admins to manage multiple namespaces; we would enable those. And profile level three is where you go to a full...
B: ...virtual cluster model, where you have different API servers, which is what another project in the working group is attempting to do. So that's how we're structured today, but I like the idea of having more.
B: Right now we have these test groups defined, both on our GitHub page and in this document, and we'll continue refining these and expanding the tests. We want to add some storage tests, which are fairly more involved and complex with PVCs, but some of those are important to do. So we'd definitely love to team up with folks who are interested, and get some more feedback on whether these tests are correct.
C: Yeah, just to follow up on what some other folks have already said: one of the things we talked about for the pod security standards was building some sort of test that could run those and see if a particular cluster or namespace was conforming to one of those profile levels, and this seems like a really natural fit for that goal.
C: So yeah, it would be cool if we could. I don't know if all of those checks are in here already; any that are missing would be a good addition. But also, kind of like what we were just talking about, if there was a concept of a test suite or a test group, where you could say "I'm just interested in the pod restricted tests, tell me if..."
C: ...you're meeting this profile. Or, to test against this profile level, we recommend this tool, and get some more visibility to it. Okay.
B: All right, yeah, I'll reach out on that, and also send more information. I can add to the agenda where we have some of these profile levels defined. I haven't read up on the latest on the profile levels, but I'll do that, and we can discuss what it would take to do that mapping.
A: A lot of people have both requirements, in my experience. People have compliance requirements, and people want to run big multi-tenant clusters. They might be separate people, say the security team and the cluster operators, but there are definitely some intersecting concerns there.
A: Okay, the other thing: I think we have a brand new SIG, SIG Security, in the formative phase right now. They are working on some benchmarking stuff, I know, as part of their charter. I think they might be interested in this.
C: Yeah, definitely. An announcement and scheduling of first meetings should be going out soon, probably to kubernetes-dev and kubernetes-security-discuss, so keep an eye out.
A: Thank you all.
C: Wow, what a setup. Who could ask for a better setup for that? Let me share my screen; I'll open it up.
C: Can you see that? Yep, all right. So, unless you've been hiding under a rock, or quarantining with no remote access...
C: ...over the past six months, you've probably seen some of the rumblings about CI improvements. The 1.19 code freeze was longer than normal, the merge queue around the time of code freeze was much worse than normal, and there were sort of three distinct issues.
C: A test that runs great if it has three or four CPUs available to it, but then gets starved and flakes if it has half a CPU. That work is pretty much complete at this point, which means that now, when we see flakes, it's a much better indication that there are problems in the tests themselves, or in the things that they're testing. And so we are working with all of the SIG leads and the component owners, as we start going through 1.20 development, to build in this cycle of paying attention to your test health and your test coverage.
C: The first one is actually not working at the moment, there's an issue for that, but the triage board is a really good way to get a sense for whether there's an issue across test environments. I actually did a training session in SIG Testing that we recorded, which is useful and I would recommend, and as part of that, I pulled up the sig-auth-filtered view of that triage board.
C: It's a little small, I apologize, but you can see that prior to a few days ago, there were actually a ton of sig-auth-tagged tests that were just failing or flaking constantly. It turns out one of those was known to be flaky, kind of got relegated to a flaky suite, and then ignored for like a year. I was able to go clean that up, and there are like one or two remaining issues, but basically we now have much better signal.
C: So if we filter the triage board to sig-auth, once this noise rolls off the history, if we start to see issues, that actually means something now, instead of this sort of low-level background of "oh yeah, it's just always flaky, that's just how it is". And so, like I said, we're trying to build this cycle of regularly, probably on a weekly basis, checking the health of the things that are supposed to be testing the components we're shipping. A couple of other resources: there's a link to the video that I did.
C: This doesn't tend to be an issue so much for sig-auth tests, which is good. It tends to be more of an issue for long-running operations, or operations that have to navigate things like network drop and reconnect, so API machinery and storage, with some of these longer-running operations, tend to have issues with flakes. But I walked through some strategies for reproducing and pinning down flaking tests.
C: There's also a link to a pull request that will hopefully go in this week, which will change our unit tests from being required to pass one third of the time to requiring them to pass three times before merge. Our unit test presubmits were basically tolerating up to two failures per unit test, and sig-auth did not escape: we actually had a flaky unit test around CSR defaulting, where the fuzzer didn't match the defaulting rule, and so it would sometimes fail depending on what random data it put in the certificate API.
C: The second gap that we have is around upgrade tests, and fortunately we don't do a lot of things that tend to interact with upgrades. The main one that we exercise regularly is the default policies, the RBAC policies; they're built into the API server, and so we have unit tests around the upgrade scenarios for that, which ensure that when you upgrade, new permissions are only added to existing rules.
C: We never, ever automatically tighten policy on upgrade, because upgrade has to be safe. But we have come across a feature, the bound token feature, where we are getting to the point where we want to switch which type of token we inject into running pods, from a secret-based token to a bound token, and that has the potential to be extremely disruptive to running workloads on upgrade. And so I linked to the discussion in the enhancements update around that.
C: I think this is a good example of a place where we really need to prove that this change is safe before we roll it out to open source. There's discussion in that issue, and SIG Testing is involved as well, in trying to find ways for us to run tests like this. But starting to think about that as a category when you're designing a feature, thinking about the rollout of it, is important, especially given how fundamental a lot of the things we work on are.
C: If you can't authenticate, not much else matters; it doesn't matter how great all the other features are if you can't even make the API calls to exercise them. And then the last thing I wanted to call out was a proposal that Wojtek from scalability has been working on. This is trying to come up with ways to...
C: ...encourage the project to address issues like this: making sure that we have things in place to allow for scale and soak and upgrade tests, making sure we have good signal to recognize when we have issues in an area, and then the feedback loop of saying, if we have failures in a component or a SIG's area, then the only types of changes that should be made in that area are ones fixing those issues.
C: So we want to slow down feature merges and changes that are not related to fixing the signal that tells us whether what we have is working. If you haven't seen that, take a look. So yeah, I wanted to make sure people know about these resources and start to think about them as you work on your individual features, but then also as a SIG.
A: Awesome, Jordan, thanks for all the hard work. One question: you mentioned we did have some flaky unit tests. How are the end-to-end tests doing? Or is that pending the fixed testgrid or test cluster?
C: The end-to-end tests, last I looked, were fine. The ones I pay attention to the most are the sig-auth ones and API machinery. API machinery has a few timeout-type ones, where a controller takes longer than expected in some runs, so those are a little tricky to pin down, but most of the sig-auth ones are very concrete about whether they pass or fail; it's not so much a timeout-type issue.
A: And regarding the upgrade tests, what's the current plan? Are we either implementing in a deprecated suite, or waiting a quarter for us to graduate a feature?
C: My perspective is that getting the existing upgrade suite running again, and writing a minimal test around this, is probably the quickest way to get coverage. That's probably what I would recommend.
C: It does a pretty good job of separating the upgrade aspect from the functional: what should the world look like before, what should the world look like after, what are the checks you want to run? I know SIG Testing is working on a way to do upgrade tests that isn't coupled to the kube-up cluster spin-up, which is great. From talking with them, once that's available, moving the existing upgrade tests from the e2e upgrade suite over to that framework doesn't seem like it would be that difficult.
A: Let's move on. Does anybody have any announcements?
A: Tim had one, but it looks like he's not on the call anymore. I think he was just going to say there's an upcoming inaugural meeting of SIG Security, so keep an eye out on the mailing list. And I think there's actually already a SIG Security mailing list, if you go to...
D: It was created either yesterday or the day before, I don't remember which, but yeah, one can now join the mailing list, and Jay is working on an announcement and on how to drum up appropriate levels of community support around the kinds of things they have in their charter.
A: Awesome, I'll paste the mailing list link into the agenda, and then let's move on to triage.
C: We could probably also pull in the sig-auth-tagged integration tests. I'm not actually sure where this dashboard is defined, but we do have more tests than this.
A: I think it might just be in test-infra, if I recall, somewhere in a YAML file. Wow, great job, everyone.
C: Also, I don't know if you followed the ContribEx discussion. The goal is to add a triage, or needs-triage, label to new issues and new PRs, and I think that will simplify this.
C: I don't think there's an owner. I think the person who's most interested in it is not an org member, so they probably can't own it.
A: Not even sure if this is a good idea anymore. Cool. So, I looked at this one last night. It was opened a really long time ago; it first went to SIG Storage, went to Clayton, then to SIG Cluster Lifecycle, and then API machinery. All right, now that's me, and then on to sig-auth.
C: I haven't looked at the issue to see if they provided more information, or if it's reproducible, and whether this seems like a real issue.
A: So if you make any modification to either the ciphertext or the key, it's going to fail to authenticate, or decrypt to garbage, and then we pull out that length prefix.
A: We get this "invalid padding on input", so either a bit flipped in storage and etcd changed something, or the key changed, and I'm going to guess that somebody regenerating a key accidentally is more likely. However, I don't know.
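The failure mode being triaged, where either a changed ciphertext or a changed key makes decryption fail closed, can be illustrated with a small sketch. This is a toy encrypt-then-MAC construction for demonstration only, not the kube-apiserver's storage encryption (which uses AES-GCM or AES-CBC providers); the point is just that both corruption and a wrong key surface as the same authentication error:

```python
# Toy illustration: any modification to the ciphertext, or use of a
# different key, makes verification fail before decryption is attempted.
import hashlib
import hmac
import os

def seal(key: bytes, plaintext: bytes) -> bytes:
    # XOR against a hash-derived stream, then prepend an HMAC tag over
    # the ciphertext. Real storage encryption uses a proper AEAD cipher.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return tag + ct

def open_sealed(key: bytes, sealed: bytes) -> bytes:
    tag, ct = sealed[:32], sealed[32:]
    expected = hmac.new(key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # A flipped bit and a regenerated key are indistinguishable here.
        raise ValueError("authentication failed: ciphertext or key changed")
    stream = hashlib.sha256(key).digest() * (len(ct) // 32 + 1)
    return bytes(c ^ s for c, s in zip(ct, stream))

key = os.urandom(32)
sealed = seal(key, b"secret payload")
assert open_sealed(key, sealed) == b"secret payload"

# Flip one bit of the ciphertext: fails closed.
corrupted = sealed[:-1] + bytes([sealed[-1] ^ 0x01])
try:
    open_sealed(key, corrupted)
except ValueError:
    print("bit flip detected")

# Wrong key: fails the same way, which is why the error alone
# doesn't tell you whether the data or the key changed.
try:
    open_sealed(os.urandom(32), sealed)
except ValueError:
    print("wrong key detected")
```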
A: Using an authenticated container format, yeah, that's another option, but it still doesn't tell you whether it's the key or the ciphertext that changed.
A: We did have the one issue where bits were being flipped in etcd, which was causing corruption of the kind, or somewhere, we don't know. Do you remember that? Yep. That's my nightmare.
A: Right, namespace creation and deletion automation. So what were you saying about support? You're saying that etcd3 actually relaxed that?
A: I don't know; we used to just send people to Stack Overflow, but I think that changed at one point.
A: I know where this is coming from. We see a bunch of fall-throughs onto the SubjectAccessReview webhook when the node authorizer is loading.
C: We need to think about what we want to do there. It sounded like the discussion was going in two directions. One was indicating readiness, like factoring authorizer readiness into the API server readiness signal, which could maybe help if you have a multi-API-server cluster: wait to bring one into rotation until its authorizers are happy.
C: Yeah, from the application's perspective, rejecting looks the same as everything in the chain having no opinion.
C: If we're talking a couple of seconds for it to be ready, then that's maybe okay, but we don't know it's just going to be a couple of seconds. We could wait for a little while, I don't know. Some of the "wait for a short period of time if we haven't synced yet" options seem the most reasonable to me, but...