From YouTube: Cloud Custodian Community Meeting 2023-09-05
Description
Our community meeting is public and we encourage users and contributors of Cloud Custodian to attend! You can find the notes for this meeting in both GitHub Discussions and in HackMD:
- https://github.com/orgs/cloud-custodian/discussions
- https://hackmd.io/@c7n
Check out our Slack for more info! http://slack.cloudcustodian.io
A
We're still keeping this note around, I think, even as of the next release that's coming. I think 3.7 we're still supporting for now, but we're still planning on moving on from it this year. We're getting toward the end of the year, so I would think that happens pretty soon. And I know one of the questions we got in some of the threads in Slack, and there was some discussion, was around the next release.

A
I believe the release blockers are sorted out at this point; the RDS termination was one of them. The planner, it sounded like you were having some audio-over-video issues, but.

B
Sounds good to me. Okay, cool. So we're planning on the release, and that's currently scheduled for tomorrow AM EST, and I'm not aware of anything that I would consider a blocker at this point.

A
All right, that sounds good, go for it. Hopefully everything is smooth and happy with that.

B
And the process for doing releases is considerably more automated than it ever has been.

C
I'm excited to hop on and see the process; I never got a chance to kind of go through that. So.

A
Sounds good, okay. And then, since you're talking about doing some of the release, I know on the PR and issue side you had the Glue catalog filter; we can talk about that a little bit. So, release planning this week, ideally tomorrow. That sounds cool.

A
Before we get into individual PRs and issues, does anybody else have any questions or comments?
B
We already have multiple people able to do releases. I didn't really do a release for a while last year; I know other people did, and it's a move towards what I think is the right answer: we're trying to automate it and eventually, ideally, get humans out of it as much as possible.

B
But that's a long-term goal and aspiration; we're not quite there yet. I mean, it's a bit of a random aside discussion, but this is probably the place for those. So in that context, what is remaining to get to no-humans-in-the-loop releases, I guess? Because we've made significant progress.

B
The human-in-the-loop process is effectively, before the release, creating a PR that updates dependency graphs and data sets, and then it's more than likely a human has to go and fix whatever issues come out of that. Then, as a follow-up to that, there's a notion of doing the release just on a cron job, and the only thing that's actually remaining.

B
There is actually the manual part: the manual updates to the changelog, because, you know, commit messages aren't always great, and I think we could probably fix that. To be honest, we could automate that part if we just had a bot on PRs that checks pull request titles and so on. So I think we can get there; we're pretty close, to be fair. And then the last bit is just getting rid of secrets. At least on PyPI, they now have OIDC authentication.

B
So that way we don't even have to have a static secret in the repo anymore to do the publication. So that's about it; it all seems very much in reach, but we're not there yet.
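
To make the OIDC point concrete, here is a minimal sketch of publishing to PyPI with trusted publishing so that no static token has to live in the repo. This is not Cloud Custodian's actual release workflow; the workflow name, trigger, and build step are assumptions.

```yaml
# Illustrative only: a GitHub Actions job that publishes via PyPI trusted publishing (OIDC).
name: release
on:
  workflow_dispatch: {}
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job request a short-lived OIDC token for PyPI
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build   # builds sdist/wheel into dist/
      - uses: pypa/gh-action-pypi-publish@release/v1   # exchanges the OIDC token, no stored secret
```

The `id-token: write` permission is what allows the publish step to authenticate to PyPI without any long-lived secret stored in the repository.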
B
Currently we do functional tests against AWS, but we're looking for help directly from some of the cloud providers on getting functional test accounts with other providers, in some cases through the CNCF, in some cases direct. Currently we only do it against AWS. I'd like to get GCP into that mix; that access may come through the CNCF, and then OCI may come direct from them.

B
There's a team there that I'm working with on Oracle support, and they have that as an item in their backlog.

A
Sounds good. Anything else? Any questions or comments on the release process or timing plans?

A
All right, it's PR and issue time. Patricia, I know we had a little bit of back and forth about this, and I was out and let it hang for a bit. But.

A
I know you had this PR for the Glue catalog, the kms-key filter. I was on the fence about this and how to respond to it, and that's where community meetings are super helpful. The background is that we've got a glue-security-config filter today, but usually, if we're looking to do KMS key filters, we have a dedicated KMS key filter, here with the Glue catalog settings.
A
But this glue-security-config filter was doing some weird stuff on the back end: if you matched on a key ID, it was kind of finding aliases and updating the existing resource list, and dumping out something to resources.json that didn't match reality. So I think, Patricia, you were saying that it was updating resources when you didn't want it to, or some of that ID and alias stuff was getting a little mixed up. Is that right, or is that not covering everything? Yeah.

C
I think the alias issue is on AWS: when we try to put encryption on any resource type, if the alias of the key doesn't exist, AWS kind of throws an error when you try to use that key. But with the Glue catalog that doesn't seem to be the case; if you try with an alias like "xyz" and try to encrypt the catalog settings, that goes through the glue-security-config filter, which basically behaves like the KMS filter for the Glue catalog, but I think it doesn't know how to handle a few edge cases. That's all.

A
Yeah, so I guess one of the questions here is: do we add a KMS key filter, because that's kind of an expectation, or do we try to update the existing glue security filter? Yep.

A
And glue-security-config had something else that you could update too, right? It wasn't just the keys; you were changing something else, and.

C
Yeah, that was on the set-encryption action type. There are two settings, and even with the CLI, if you try to change one config, it tends to override the other one, because that goes as a blank input in the API.
C
Basically it tackles those two things. I was also on the fence about the KMS key filter, because I didn't know how to make the changes: the glue-security-config was a filter on the AWS account resource type, which was extended to the Glue catalog. So yeah, I was not sure how to go about this issue.

A
Yeah, that does seem a little bit tricky. I'm wondering, because people expect to see a KMS key filter, if it might make sense to fix or update the security-config filter and then have kms-key as an alias to it, or if that's more confusing. Does anyone in the group have enough context or passionate opinions to lean one way or the other on that?

B
I mean, if we already have an existing filter, does it make sense to extend that? That would be my thought, but I have very little context here.

A
Yeah, that seems reasonable to me too. I think part of the issue, and Patricia, this seems like your case too, was that when you looked at the schema it wasn't obvious that there was a KMS filter, because the name was not what you expected. So yeah, maybe fixing the existing one and having an alias to it. All right, let's roll with that; that seems reasonable. If for some reason it ends up not working, then we can follow up on that.
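
For anyone following along, a rough sketch of the shape being discussed. The glue-security-config name and the proposed kms-key alias come from the conversation itself; the filter attributes and values below follow the usual c7n kms-key filter pattern and are illustrative, not the final schema.

```yaml
# Sketch only: match the Glue data catalog on the KMS key used for encryption,
# exposed under the familiar kms-key name as an alias for the fixed
# glue-security-config filter. Key/value shown may differ for this resource.
policies:
  - name: glue-catalog-expected-cmk
    resource: aws.glue-catalog
    filters:
      - type: kms-key
        key: c7n:AliasName
        value: alias/my-glue-key
```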
A
That was the main issue I remember having a follow-up on from Slack. Does anybody else have specific PRs or issues you want to talk about here?

F
I think we can jump to the last comment from Stephen there, just to ground the discussion. I think this is his use case. Stephen, if you wanted to read it or just describe it, to give everyone some context.

G
Yeah, sure. Basically we're deploying a policy out which is, you know, talking to Service Quotas, and the policy is pretty simple: it just requests, you know, a limit increase any time there's quota usage above 80 percent.
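
For readers, a minimal sketch of the kind of policy being described. The resource, filter, and action names follow Cloud Custodian's Service Quotas support, but treat all of the details, including the role, threshold, and multiplier, as illustrative assumptions rather than the team's actual policy.

```yaml
policies:
  - name: service-quota-auto-increase
    resource: aws.service-quota
    mode:
      type: periodic
      schedule: rate(7 days)   # the weekly schedule that fires in every account at once
      role: custodian-exec     # placeholder execution role
    filters:
      - type: usage-metric
        limit: 80              # flag quotas with usage above 80%
    actions:
      - type: request-increase
        multiplier: 1.2        # illustrative headroom
```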
G
We want to basically have that deployed out to all of our accounts and regions, and we have quite a lot of accounts. The issue we're hitting now is, we deploy this policy out to all those accounts at the same time, basically with a weekly rate, you know, periodic schedule, and they all kick off at the same time, because basically we're deploying all the policies around the same time, and it.

G
You know, it seems to be bombarding that AWS Service Quotas endpoint. We've seen this happen with Config before and some other service endpoints, where, you know, if you have many, many requests from many accounts hitting it at the exact same time, it can cause throttling issues. So that's sort of what we're seeing, and it was, you know.

G
Originally the thought was, you know: can we introduce something into the periodic schedule to possibly randomize the cron for when it runs? As an example, if we did daily, it would randomize the minutes and hours; if we did weekly, it would be the day, minute, and hour, something like that. That's what we were thinking to potentially address this, so that when it deploys out it's basically kind of evenly distributing the executions across some time frame.

B
The problem is, I think, there's an open question there. I provided feedback in that conversation with regards to: there are limitations on periodic, and especially if you have a lot of policies and a lot of cardinality, you're almost always going to be better off providing it some form of compute, because they can get cached on the lookups as well.

F
I think it just wasn't clear to us. So is your recommendation "hey, don't use periodic"? So then what mode do we run it in? Obviously not event-based, because these are not anything event-based. So then are you saying to use pull-based? Is that what you're saying, and running on.
B
Exactly, you do a pull-based mode, and that way you get a cache, and then you're not polling repeatedly; you can have, you know, a thousand policies on a single resource and it's okay in that context. Whereas if we did them all as periodic, you're going to immediately run into this exact issue. And what you want to provide in terms of compute is open-ended; we don't dictate what that is. There are many options available.

F
Yeah, for us that approach would be like a huge shift in how we set up our pipelines. We purposely chose to go with everything being deployed and run in the target's account. I would say the main reasoning behind that was so that we don't have a single point of failure: if anything goes wrong, the things that got deployed out to the target's account continue to operate. I think that was one of the principles that we.

B
It's worth exploring, yeah; I understood how you got there. I think the question is also on what it even means to do the ask, let's say to.

B
To do the original request: what does it mean to randomize the schedule? The problem is, randomizing the schedule requires, I think, a granularity on the randomization, and we have to do that when we go to create the schedule itself, because we can't necessarily do it when we get the event; at that point we're in the runtime constraints of Lambda and we're just losing available runtime for the cardinality of resources.
H
Yes, yeah, I wanted to comment on this in Slack, but I think it's a little easier to hear it here, if.

H
To do some sort of balancing on, like, a random policy execution, we would have to know the entire state of all of the potential policies that you're trying to execute anyway, like.

H
What you're describing here is a policy orchestrator, yeah, and that's significantly more than just coming up with a random time to execute the policies. I think, additionally, there are tools in c7n like policy conditions, where you can use that to evaluate whether or not to execute. The policy condition evaluation happens before any sort of polling against the API happens, so for something like Service Quotas we're not going to make those hundreds of paginated calls.

H
We usually are going to evaluate, you know, whether or not you should be running anyway. So doing things like, you know, checking what account you're in or checking what time it is, you could do that with policy conditions. Given the way that you've written policies, and from what you're describing, applying to target regions like that might require different policies for different accounts, or you can use.

H
You could use things like variable expansion to pass the account ID into the policy condition, so you can still maintain a single set of policies, using variables to fill it in, stuff like that. But, I mean, me personally, in general, I think it's a no-go in terms of what the ask is here; it's just way outside the scope of what Custodian can do right now.
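
A minimal sketch of the policy-conditions idea just described. The condition keys follow c7n's policy execution conditions, which are evaluated before any resource polling; the specific account list is a made-up example, not a recipe from the meeting.

```yaml
policies:
  - name: service-quota-check-subset
    resource: aws.service-quota
    conditions:
      # Skip execution entirely unless we are in one of these accounts;
      # account IDs here are placeholders.
      - type: value
        key: account_id
        op: in
        value:
          - "111111111111"
          - "222222222222"
    filters:
      - type: usage-metric
        limit: 80
```

Because the conditions block short-circuits before any Service Quotas calls are made, a policy that opts out this way costs essentially nothing at run time.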
B
Well, it's only outside the scope because we have very clearly defined paths to get around this, which is: use compute and pull instead of periodic, because there are a bunch of other limitations on periodic, especially when you're dealing with scale like that. You're just trying to fit a round peg in a square hole; sorry, yeah, a square peg in a round hole, you know.

F
And when you're talking about using compute so that we can leverage the cache, I'm guessing that's because we're using mostly all Lambda-based and we're not really leveraging the cache. But when you talk about caching, that will only come into play, I'm guessing, once we're on the same compute, where we make different queries, let's say, on the same objects, and then it will cache and then we don't have to; so.

B
The cache scope is account, region, service; sorry, account, region, resource type. And on your question or comment about distributed versus centralized: you can still go distributed per se, like you fire off a container in the target account. How and where you run compute in this context is not something that's dictated; it is very much designed to be flexible, because people run Custodian from Jenkins or, you know, GitLab or Kubernetes, or, you know, Fargate or a big EC2 instance.

H
Yeah, there are also cache benefits beyond that. I think the obvious benefit of caching is that if you have two EC2 policies both executing, you only have to run the describe-instances call once, but there are also cache benefits across policies of different resource types. So any time that, if you're familiar with the source code, where we call the resource manager, it'll do a cache lookup; say you do a security-group-related filter, that's another API call that we don't want to hit.
J
Just going to say, as an example, we do exactly what they're suggesting, where we run Custodian in each account, but we run it on CodeBuild, and then we have our own CloudWatch Events that fire off jobs on an hourly, weekday, daily, weekly, these kinds of schedules, and it just runs in CodeBuild and then says, you know, well.

J
We have our own wrappers that do, you know, template-based dynamic policy generation, but then we also say: okay, now run all the policies that start with, you know, "daily-", to pick out the ones that are supposed to run on a daily schedule, that kind of thing. And so all of our policies that we write benefit from the cache, because they're all running in the same context at the same time.

F
But all of them, they'll still be running within a particular target account, and not for.

H
Us, yeah. Yeah, I mean, you can slice it, you know, however you want. You could have, like, a Lambda function dispatch jobs into the target account; you could have, you know, whatever you want, fancy, right. You could have something fan out from the main account; there are plenty of different ways to slice it. But, I mean, for large accounts the time savings are significant, as well as the lower strain on the API.

B
So decentralized versus centralized is sort of orthogonal; you can go either way, and I think some people who start out doing it centrally end up decentralizing, like firing up compute as ephemeral compute, you know, CodeBuild or, you know, Fargate. That is very much in line with your current architecture, but it runs into many fewer problems than periodic, which I feel like is our number one footgun that we have at the moment.

B
Probably we should have an issue, at least calling that out, yeah.
A
Yeah, we have seen folks go both ways, you know, waffle between centralized and decentralized on c7n-org for those periodic policy runs; I say "periodic," I mean the c7n-org, compute-based ones. Because with the centralized one, I feel like it's a bit easier to control that stampeding-herd type problem, because you've got some limit to the parallelism that comes with c7n-org, so you're not going to run across.

G
The idea is, you know, we're leaning towards not wanting to add support for this because it's supported by other mode types; is that kind of the main gist of it?

B
Yeah, the gist of it is that there's a certain intractability around scaling periodic mode, and that is obviated by providing any other form of compute, starting with the standard poll mode, and there's a boatload of benefits that avoid the exact same problems you're having now, and likely other problems that you'd also have.

B
The thing is, you don't need to randomize once you go to compute; that problem goes away. I'd say if you move off periodic and are just executing the policies, you don't need to randomize, because you're no longer dealing with concurrent things that are trying to pull data down repeatedly, since you're also taking advantage of the cache. You should probably see a significant API reduction, actually, in this context, but.
B
If you're on periodic, everything is effectively fully independent. So, assuming all-decentralized in both scenarios, periodic is basically saying: let's spin up a bunch of things at the same time, and they're all going to hammer the API, and there's no opportunity to share a cache. If you use any form of compute, you're going to have the opportunity to share a cache as it goes through them, and you still have the same schedule.

B
Okay, so you're actually reducing your API calls by, like, O(n), where n in this context is the number of policies against the same resource type, at a minimum, and then, as someone alluded to, there's additional savings from other related caching.

B
Actually, the hold on the cache is the primary thing; the granularity is typically account, region, service, and sometimes down further to the API call. But in this context the cache is going to hold at account, region, service, and so in that context, with most of these services, if you have a hundred policies against it you're going to be able to keep the cache between them, and additionally you're also reducing concurrency against that thing.
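
To illustrate the sharing point, a sketch with illustrative policy names and filters: two policies against the same resource type, run together in one pull-mode `custodian run` invocation (for example from CodeBuild or Fargate), reuse one set of describe results through the cache instead of each polling the API.

```yaml
policies:
  - name: ec2-old-instances
    resource: aws.ec2
    filters:
      - type: instance-age
        days: 90
  - name: ec2-unencrypted-volumes-attached
    resource: aws.ec2        # same account/region/resource type: reuses the cached EC2 describe
    filters:
      - type: ebs
        key: Encrypted
        value: false
```

With the CLI's default cache enabled, the second policy's resource enumeration comes out of the local cache rather than a second round of describe calls.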
I
That's the limit on the AWS side; it's not the per-account, per-region TPS limit, so.

I
It's about, I think, 20,000 Lambda functions running in, like, a 10-15 minute timeframe, right? So yeah.

E
Just to make sure everybody understands: when you run these as a Lambda function, every Lambda polls the API, and when the Lambda closes, the cache dies; it's gone, nothing else can use it. So your next Lambda needs to pull it again from the API, and the next one needs to pull it again, and the next one needs to pull it again, and that's what's killing you. Which is why, when we use compute, we can use the cache so heavily.
F
Yes, exactly, that's what I was trying to get to: you know, running it in compute. As you get into it, it runs serially, and then of course you're not hammering it all at the same time.
B
I mean, it's serial across the policies that you put into the poll mode. It sounds like you're saying that the Service Quotas API has some special issues.

B
That is maybe not quite on par with typical Amazon architecture with regard to cell-based division, but, I mean.

B
I think I'm going to explore that a little bit more, but that may require talking to some AWS teams. The guidance that came universally from the crowd, though, I think stands regardless of that.
I
Hey, I want to kind of bring the conversation back to what Stephen was asking, instead of talking about what's the best practice in compute versus non-compute and stuff. We cannot change our current deployment structure, you know; we can't use a single account, for resiliency, for security and compliance, various reasons. There's a reason why we ended up with the current deployment structure, and we simply need to randomize the execution time and schedule in the periodic mode, and.

I
We basically have two options. One is, on our deployer side, we create our own internal tooling and randomize the cron time. Or, and this may be a benefit to, you know, other users of c7n.

I
If c7n has built-in support: currently we have, like, what's the rate, you know, rate(1 day) or something, right, for daily. Maybe we can have support for rate(1 day) random or something, so that c7n generates a random schedule for a daily schedule, and the same for weekly and hourly. That's about it, and, you know, if you guys think that will benefit other users, then maybe we can make a pull request.

B
I appreciate that perspective, but the underlying randomization is on the scheduler itself, and it's not clear to me. It also needs to define some sort of randomization scope, and it also requires parsing the expression, and there are a lot of little things there that are finicky, frankly, and it's not clear. And by the way, like I said in the sidebar, using compute does not mean you have to be in a single account.
B
To the service; but any use of cache here will also.

B
In a fashion, you'll also get natural randomization, effectively, because as you go through the other resource policies you won't necessarily hit this one in particular. Whereas when you do poll-based; sorry, when you do periodic, they're effectively all firing at the same time regardless. You'll get natural randomization anyway, and you'll get a lot of other benefits from caching for the other periodic policies that you may have.

H
I think, for what it's worth, there is still a path. If you really cannot move off of periodic, you can do, you know, variable expansion or, yeah, policy conditions to achieve the goals you're trying to get to. But yeah, I mean, like Kapil said, it's a much larger.

H
You'd have to know the entire state. It's like song randomization on iTunes: when they introduced purely random song playback it wasn't random enough, and there are things you have to do to make it, you know, feel more random than that.

F
I think, for me, I got the answer, and it's that, for things that we want to do at scale, yeah, we run on compute. And as Kapil said, that doesn't mean centrally running it from a single account; we can set things up to run in the target account. I think Todd also mentioned

F
that's what they have done too. But there has to be some sort of either serialization or randomization to it, because with the quota service, again, it's not a limit within one account. Okay, so even with the caching and whatnot within that one account, I don't think that's going to solve the problem of the quota-service API where, all of a sudden, across three thousand accounts, it all hits the API at the same time. It actually becomes an issue with the quota-service API, so yeah, there's some;

F
there has to be some sort of randomness to it too.
B
So on the randomization: you're just injecting that into the policy. Since we're suddenly talking about variables, one thing that recently went into c7n, because, you know, shift-left stuff tends to run in CI pipelines where everything gets exposed through variables, is the ability to expose environment variables directly to policies; it made a lot of sense to do that. If that provides a reasonable path for some of the randomization that you're looking for, then I think that's definitely in scope.

F
I think for us, at this point, it's too hard to change the way we deploy and run our policies. I think what we're going to do instead, since we do have an orchestration layer that sits on top of Cloud Custodian and that's how we deploy policies to our target accounts, is to do the injection there: randomize the CloudWatch schedule, the cron expression, at that level.

F
I don't know if that's going to solve everything, but I think that's what we're going to try out, at least for this particular issue. But as we run into more and more similar cases like this, it might not work as things get bigger.
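
A small sketch of what that orchestration-layer workaround could render for one account. The random minute and hour and the role name are made-up values; the randomization itself would live outside the policy, in the deployment tooling.

```yaml
policies:
  - name: service-quota-auto-increase
    resource: aws.service-quota
    mode:
      type: periodic
      # EventBridge cron fields: minutes hours day-of-month month day-of-week year.
      # 14:37 UTC on Mondays was picked at deploy time for this account so the
      # whole fleet doesn't fire at the same instant.
      schedule: cron(37 14 ? * MON *)
      role: custodian-exec   # placeholder
```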
I
Yeah, I mean, this is something we observed recently, but, as Darren mentioned, we saw similar behavior with the Config policy. When we were talking, there was a lot

I
of, you know, discussion with AWS support, because we are obviously not hitting the TPS; it was like three TPS or something, and we are not even making that, and yeah, we concluded it was a problem on the AWS side. But whether it's decentralized or centralized, there is a, you know, security and compliance context, and even if it's decentralized we will have the same problem, because we will end up deploying, you know, for each account and region, and we'll hit the same thing. So yeah, it sounds like we have to go with our internal solution for now.
G
Yeah, thank you guys for the lively discussion around this, Kapil and Sunny and AJ, for chiming in, and the chats, and for letting us kind of raise this and figure out the best solution. Really appreciate it, guys.

A
Thanks for bringing it up; it does seem like a valid issue. I know we have some information on the Lambda side in the docs about what gets disabled by default; I know there's a note in there about disabling caching, but I think it's not clear enough. I don't know; I think you've mentioned this before too, that it's not clear enough about

A
the sort of walls that we're setting people up to run into when they use periodic a lot. So I think, separate from the Lambda mode stuff,

A
we should call out periodic specifically, and I think you mentioned just making those documents a little bit clearer, and maybe consolidating and condensing some of this discussion into a warning block or something in the docs, so it's a little clearer what's going to happen. And.

A
All right, that was a good one. Do we have, thanks Aaron, we exhausted Darren, any other issues or PRs that are worth bringing up, or did that last discussion raise anything else for anybody?
B
About the release tomorrow: I did notice that the Docker build last night failed. I'm hoping it's transient, but I'm tracking that for the release, and there's still going to be at least a version-bump PR coming as well prior to it.

A
All right, one other thing I'll mention. I think this came in when I was out, but someone else on the call may run into this. I know a while back

A
we had an issue where, if you tried to interpolate the now value when you're running in a serverless mode, it would expand that now to the current time when you provisioned the function, which was generally not what people wanted, and you sort of had to work around it by adding double braces. I had a change to fix that a while back, by making it not expand now when we're running in serverless modes. But it seems like, and this is bringing it full circle back to periodic,

A
if you go to run a periodic policy, it is both serverless and pull mode, and so it looks like there's an issue where it was just not expanding now at all. I think that's what's going on, but I'm just mentioning it here; this is a good issue. I know, Kapil, you already commented on it.
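
For context, a tiny sketch of the pattern being discussed. The tag action and key are illustrative assumptions, not the reporter's actual policy; the point is a periodic policy that interpolates now, where provision-time versus run-time expansion matters.

```yaml
policies:
  - name: tag-last-checked
    resource: aws.ec2
    mode:
      type: periodic
      schedule: rate(1 day)
      role: custodian-exec   # placeholder
    actions:
      - type: tag
        key: c7n-last-checked
        # "{now}" historically expanded when the Lambda was provisioned; the
        # double-brace workaround ("{{now}}") deferred expansion to run time.
        value: "{now}"
```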
D
So would that also be the cause of, say, in GCP, when processing GCP tags, it going "I can't parse this as a date" or "I can't add this as a tag" when using now?

D
Yeah, because GCP is so particular about what it will take in a label value. Gotcha. It's unpleasantly particular, to be honest, but yeah. All

A
right, well, yeah, I bring this up just in case. I mean, we've got a pretty good crew here running policies in a bunch of different modes, so in case somebody sees something weird, I'm just calling it to your attention that I may have busted something. And it looks like I

A
think part of this is that when we go to load policies, every command wraps this kind of internal load-policies method that's going to do the variable expansion. So we might have to just tweak the way that I'm saying "don't expand for serverless" and make it more like "don't expand during a provisioning phase," even though I don't think it's super obvious to tell that from the way the logic is now, but.
B
I'll look at it and make sure; we probably have some isinstance checks, and for periodic we probably need to have a list or something, or maybe we need to double-check on subclassing to determine it.

A
Yeah, my first thought was, like I was saying, "don't expand now if we're serverless," and I was thinking, oh, maybe I need to say "do expand if it's pull," but then that might, yeah; basically, I just need a test for periodic mode and to make sure that we do the right thing when we're loading. And

A
so thanks for the report there. Okay, I think that's all I've got in here. Does anybody else have any topics to bring up? PRs, issues, or sort of anything else; it's wildcard time.

A
All right, silence says that we're done here, and you are free to go out into the world. Thanks, everybody; we'll catch up in Slack and see you again in a couple of weeks.