From YouTube: Cloud Custodian Community Meeting 2023-08-08
Description
Our community meeting is public and we encourage users and contributors of Cloud Custodian to attend! You can find the notes for this meeting in both GitHub Discussions and in HackMD:
- https://github.com/orgs/cloud-custodian/discussions
- https://hackmd.io/@c7n
Check out our Slack for more info! http://slack.cloudcustodian.io
A
Anybody feel up for introductions? I think we could probably skip introductions today, just looking at the cast on the call. But we had a little bit of prep before the recording started, and we've got a couple of pull requests and issues to talk through. I don't think there are any top-line agenda items other than going right into the specific PR and issue discussion.
B
No issue, but we were able to cut a new release, and the SNS payload stuff is working fine in our non-prod environment. We were seeing close to 2,000 policy errors, and I think those are all resolved right now. So it's looking pretty solid.
C
Yeah, one comment. I had a request, actually... are we recording yet? We are.
C
The goal is to have a bunch more resources, and I tried to automate it effectively: automate generating them, and automate some of the testing as well.
C
It's definitely helpful; there's a bunch of resources in there. But my hope is more about how we can use it to build out a lot more resources on a faster basis. So that's where it's at, I guess; there's a bunch of stuff still to do.
C
I think roughly half of those resources now have unit tests, everything with a custom describe. There's still some pending work I have to get to on the security findings and GuardDuty findings that you were talking about with AJ, so your feedback there is still unaddressed, but most of the rest is addressed at this point. Sorry, Advisor too: all the findings things still have to go, and that includes Advisor, but the rest seems to be pretty decent.
C
Yeah, and the testing part was interesting: we're actually pulling from Terraform examples to get a Terraform resource that we can then directly use as a functional test.
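A minimal sketch of what one of those Terraform examples might look like; the resource and names here are illustrative assumptions, not a file from the repository:

```hcl
# Illustrative only: a small Terraform example that provisions a real
# resource, which a describe-based functional test can then run against.
resource "aws_sqs_queue" "example" {
  name = "c7n-functional-test-queue"
}
```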
A
Actually, that was one of the questions I had about the testing: for some of those resources that many folks are unlikely to have, is the assumption that we'll have some Terraform templates provision stuff and then run the describe test as a functional test?
C
Some of them are easier than others. The ones that are tricky, I think, are anything where I'd actually have to spin up instances, or that would have real costs. Things like Device Farm, replication, DataSync, I don't really know what to do with, so those don't have tests. But they also don't have any...
A
That works. Does anybody else on the call, just looking at this list of resources: are any of these particularly interesting for others on the call to use or test? I see people leaning in, so I should do that a bit more.
B
Yeah, I was trying to go through it. Our sandbox account looks pretty wide, so I was hoping to find all the resources in there, and I tried to run a bunch of tests, starting with the top ones. I still have to go through the full list of what to do there.
C
Okay, I definitely appreciate you kicking the tires. The X-ray, sorry, the API Gateway stage one definitely needed some fixing, but that should be addressed now. They also have some odd ARNs, which I also addressed.
C
That's a good call-out, actually. I was also wondering about doing a delta against Config, given that right now we're going directly to the API and it's still fairly manual to configure the generator. But if we could just point it...
A
...at an arbitrary CloudFormation type or Config type and generate from that, it might be useful. It's still pretty manual as it generates, so I don't know really what it is yet.
A
So this one will be kicking around a little bit more until folks are able to test it and work on it. Cool.
C
One other topic I had for the meeting: I was going to try to cut a release of Custodian this week, possibly in the next 24 or 48 hours. We have backslid the last few releases to doing it in the third week, just because of packaging and other stuff that's happened, but now we also have the building of the binary and the building of the wheels fully automated.
A
Would that release happen all driven through GitHub Actions, then? It would just be a manual kickoff?
C
It does have the ability to generate and upload to PyPI. I'm still trying to configure PyPI with the new passwordless authentication they have, so that we don't have static keys to publish; it'll do OIDC stuff. But that still needs some more baking time. Also, the changelog generation currently still requires some treatment, so it will...
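The passwordless publishing being described is PyPI's trusted-publisher (OIDC) flow; a minimal sketch of what the GitHub Actions side of that looks like, where the job and artifact names are placeholder assumptions, not the project's actual workflow:

```yaml
# Illustrative workflow job for PyPI trusted publishing (OIDC, no static keys).
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the OIDC token exchange with PyPI
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: wheels   # placeholder artifact name
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
        # no password or token input: PyPI trusts this repo+workflow via OIDC
```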
A
So right before the recording started, Jerry mentioned pinging about this open PR on the bucket replication filter. Kapil, you and I had both looked at this one, and then you suggested pivoting the logic to use more of a list-item filter. We had a lot of back and forth, so I think, Jerry, you said you made the changes there. It seems like it just deserves a fresh review; honestly, I haven't looked at it since those changes.
C
I have looked at it briefly. I wasn't able to get enough time to fully dive in, unfortunately; a lot going on, traveling around. But yeah, okay.
A
Same here, but yeah, Jerry, I'll give it another look. I'm sure you will as well, but I'll see if anything else jumps out. Thanks for sticking with it and working on it and tweaking things.
E
No problem. I feel like this is a very useful filter for us, and I just added a couple of custom properties beyond the regular AWS JSON data coming back from the replication rules, so...
A
No, these are all different. I might just take a try at cutting out some of these, see if the tests will pass, and then just push an update there. But I think we're okay with the logic. You had suggested renaming it from has-pending-maintenance to pending-maintenance, which seems good.
A
The only other one, and this is, I guess, kind of related to Jerry's work on the refactor and on moving the filter to use list-item: we had a pull request come in for Azure to add CIDR support for the network security groups, and I was just looking for some other opinions on how to handle it. It came in from Isabel, who...
A
...put the work in and had a separate CIDR key, which seemed to be kind of mirroring what we were doing on the AWS side, but it ended up duplicating a lot of the CIDR matching logic. So we were just kicking around, from the policy-authoring side, what we would expect. I don't know if anyone has a lot of experience using the Azure policies, but what kind of policy structure might we want here?
A
If anyone has an opinion on this: right now we have source and destination, and those both take an IP address as a string, and it just does a string check. We're thinking, well, do we add value-filter semantics under source and destination?
A
If you look at an Azure network security group, it's got a source and a destination prefix, and then a list of prefixes as well, and we could sort of roll those all into just one list of addresses and do a list-item check on it, if that's useful. I think sometimes, when we're extending these older filters, it's good to keep in mind that we can pull in the list-item filter and some of the other improvements we've made to the generic filters.
A
So if anybody has thoughts on this, please chime in.
C
Yeah, I'm super excited that this got contributed. I was trying to dig through the Azure code because we had an existing PR or issue for it, and it is a little tricky on the implementation side, so I'm very thankful for this contribution. But no real thoughts on how to expose it.
A
Yeah, same thought here: big thanks, Isabel, if you're watching this or listening. Thanks for putting this in, and we'll get it sorted out, because you're right about wanting the CIDR checks. I think I actually filed the issue, so I'm super grateful that we got a contribution for it, because I also looked at the logic and thought, oh, this is going to take a little while to figure out how to implement.
C
8783, my bad; there's a link.
A
I looked through it, and when I saw it I was thinking of it in the context of the default security group, because that looked like where the issue was coming from on their side. But it seemed like what we're actually trying to do was ignore self-references, so that makes sense. I missed that the fixes came through, so thanks for the ping on that; I will look. Because yeah, the default groups, when they get created, they have a...
A
They have a self-reference, like a rule allowing all traffic from themselves, and when we have the unused filter, it considers a reference to itself as usage. So it's a little circular that way. I feel like the fix would probably be, for default groups, or any group really, to just not consider a self-reference as an instance of usage, and that's probably what this does, but I hadn't looked at it yet.
A
I'll give that a proper look, but yeah, it was actually one of those issues that I saw and thought, I'm surprised that hasn't come up before, because it does seem like a good call-out.
A
I'll give that one a look, thanks.
C
And then, yeah, I think it's still open-ended whether it's useful or not. Currently I think I have it set up for maintainers to see.
C
But we definitely need to figure out a way to get more reviewing help; this was at least trying to triage the things that felt like they could still use some help.
C
No, it's not based on labels. Some things come from labels, but it's mostly manual editing. The table picks up the pull requests directly, and then it sort of allows you to group them into different categories and try to assign priorities, and then you can drop a kanban view on it, on top of labels.
C
We do have some automation for being able to automatically apply provider labels correctly to all the things; sorry, it only works for pull requests, but I might potentially try to drop that into a nightly GitHub Action, if that helps.
A
We have some folks who have more experience with some providers than others, and it always takes me longer to look at an Azure PR than another provider's, so that makes sense, yeah.
A
Cool, yeah, thanks for putting that together. That's good, because you're right: I know we've done LIFO-type behavior recently, where you just go look at the open requests when you check in, and obviously just this call has shown that I definitely lose track of things. So this is cool.
C
Yeah, there's that one there, 8865, that actually looks kind of interesting, and it totally got missed, I think.
A
Let me try to post a list of just the PRs that came up on this call. I wonder if there's a way to represent that here, if it's worth doing that, or if that's kind of a separate concern; maybe it just goes in the recording notes.
A
It's probably not worth having a separate task for re-reviews or anything; that's probably a little bit too much.
C
Yeah, I think I can give it to the different cloud groups; we have some teams, or global. I don't know, it might be useful to avoid just making it wide open.
C
This was an experiment, so I just had it scoped to a smaller user population to start with, but it does seem like a useful way to collaborate. I had been trying to do spreadsheets and stuff, but that was really silly; this is much better as far as auto-updating.
A
All right, thanks for that. Any other topics?
F
Hey, I'm sorry...
Go
for
it
yeah,
so
we
recently
discovered
the
one
like
configurable
parameter
in
the
lamina
function.
That
was
a
memory
memory
limit
for
each
function.
That
which
is
I,
think
the
custodian
really
supports
it
to
you
know
by
default
it's
a
512
Meg,
but
custodians
supposed
to
bump
up
to
I
think
the
Lambda
limit
of
10
gig
and
yeah.
F
We were experiencing the symptom that when we run a certain policy locally, especially ones scanning a bunch of security groups, comparing CSV files, that kind of thing, it completes in, let's say, 10 minutes, six hundred and some seconds. But when we deployed it to the account, it would always time out, and for the longest time we couldn't resolve the problem. But recently, yeah, we found that just bumping up the memory limit fixed it.
F
Even though the Lambda function itself is only using a couple hundred MB, we can just go ahead and bump it to, you know, 1 GB or beyond, and that will actually upgrade the underlying CPU, the processing power of the function, and the timeouts stopped happening. So I don't know if this is common knowledge in this community, but I'm just sharing. And also, if this is common knowledge...
F
...I kind of want to get your input on it. So, based on my tests, I started with 512 MB, then 1 GB, 2 GB, 4 GB, and 10 GB, and I see that the improvement in processing time stops at 2 GB. To give you an example: a certain policy, when I run it locally, takes 330 seconds to execute, and when it was initially deployed to the AWS account, it hit the 15-minute timeout. I bumped it to 1 GB.
F
It finished after 730 seconds or something, and when I doubled it to 2 GB, it finished in 420 seconds. Beyond that, I even doubled it from 2 GB to 4 GB and it was the same, and maxed out at 10 GB it was still the same. This is just the one policy, and I'm thinking of testing other policies, but is this consistent with the knowledge you guys have, that the Lambda function's processing power kind of maxes out at, you know, 2 GB? Or is there more to it?
C
It's a nuanced topic. You asked about sort of common knowledge in the community; the primary common knowledge is that you shouldn't use poll-based Lambdas, or sorry, shouldn't use periodic Lambdas, for any significant resource cardinality. The amount of work that the Lambda has to do is highly dependent on the policy, yeah.
F
Yeah, exactly; sorry, maybe I didn't mention that. Of our policies, maybe a third are event policies, I don't know, and another third are periodic, and an event policy will never get timed out, because it gets executed against only a single resource, right? One example I can give, and this is also the most popular policy that always times out in our organization, is the Trusted Advisor policy. By default, and I need to check the policy, but by default...
F
You know, it uses the service quota support, and when I run it locally it will finish; I remember it takes very long, maybe 10 minutes or something, but it will finish. But once we deploy it to AWS, it almost always times out. So that's...
C
Right, and the general suggestion or recommendation would be: don't run them in Lambda. If it's a full periodic policy, use compute. Yes, you can do it; as you found, there are issues, and yes, you can work around those issues, but it's very specific to what you're doing and the cardinalities in your environment. And as you know, if your environment grows a lot, then those numbers are going to change again.
C
So in general, the recommendation for periodic policies against significant cardinalities is to run them on some form of dedicated compute, be it, you know, a Fargate container, or Jenkins, or an EC2 instance. That's generally the best practice, and we support it well, to the extent that this issue is known.
C
There are some people that only have, like, a dozen resources, and it doesn't matter; the cardinality is so low that we want them to have easy access to deploy without having to spin up a machine. When you get to a certain size, then using a dedicated compute facility that doesn't have the same limitations as Lambda is more appropriate. Lambda excels at event-based processing; in this context, it's not doing events, it's doing full scans.
A
As for why you might see the benefits stop at a certain point: when you're running those poll-mode policies, sometimes we fan out into multiple workers to fetch resource details in parallel, and I'm wondering if, as you're scaling up, we hit a certain number of vCPUs for the Lambda where at some point it just can't fan out any further; it has fanned out as much as it can. I don't know if that's...
C
It has to do with the concurrency aspect. By the way, there's another really important reason to prefer dedicated compute over Lambdas for periodics: Custodian's cache is not available in Lambda between different policies, whereas...
C
...if you have it on the local disk, then, if you have 10 policies running on EC2, you only have to fetch the resources once and the rest can run out of the cache. That reduces the API burden on your environment, which is very important to ensure the availability of the API for applications.
C
With regard to the total memory footprint and size: I think we've generally tried to back off on our concurrency, to like two or three, again because of API limits. I think the Lambda environment itself does scale at some point to, I think, four CPUs; I don't recall the exact numbers, but...
C
It's also a good discussion, and a good reminder that we should maybe document some of that, like putting a big warning on the periodic mode docs.
F
It was surprising to a lot of our internal engineers that bumping the memory, even when the Lambda function doesn't actually use it, actually upgrades the processor that the Lambda function uses. We didn't know, because every time we looked at the Lambda log, whether it times out or not, it always shows just, like, 200 or 300 MB used, so we...
F
...didn't think bumping up the memory would help, but it did. So yeah, it's just useful general knowledge for the future: if someone asks, hey, when I run it locally it finishes, but when I deploy to AWS it times out, why? Then this can be a solution for that user.
A
Yeah, I'm wondering, because some of that we could certainly capture in docs, at least the warnings around the dangers of polling Lambdas. I guess it has come up in issues before, and I'm wondering if it might make a useful discussion thread, only because it does come up every once in a while and someone will say, hey, I'm trying to do this. I'm not sure how much is docs and how much is discussions.
C
I think just a big warning on the periodic mode would cover, like, 90 percent of it and be in the right place for people to see when they're writing the policy, as they go through that doc. An additional discussion thread is still also useful, but start with the docs and then go deeper; otherwise it's going to happen again, yeah.
F
...policies that can just time out; we don't want to make a custom solution just for that, and we can't do that. But yeah, just increasing the memory does solve it. Also, the document I found was saying that by increasing memory, of course it will cost more, but because the execution time will be shorter, overall we will actually save more money. That's what it was saying.
C
Yeah, if you're trying to do that tuning, there's a widely used and documented tool called Lambda Power Tuning, which will do a graph of cost and throughput if you give it a sample workload. I've used it before when I've been doing application tuning to find what the right size is; it's basically a Step Functions state machine that tries to figure that out and give you the results.
C
But again, the general recommendation would be to provision compute, potentially centrally, to run those poll-based policies.