From YouTube: Cloud Custodian Community Meeting 20220301
Description
Our community meeting is public and we encourage users and contributors of Cloud Custodian to attend! You can find the notes for this meeting on our github repo: https://github.com/cloud-custodian/community/discussions
To get an invite to the meeting join the google group and you'll receive one via email: https://groups.google.com/g/cloud-custodian
A: Awesome, so yep, we're going to go ahead and officially start. This is the Cloud Custodian community meeting for Tuesday, March 1st. It is the first of March; that March snuck up on me real fast. Just a reminder that we are recording this, so if you can't be on your best behavior, at least don't be on your worst behavior, the kind that can get you fired from your job, because we will post this on YouTube.

A: So at least be on, you know, that behavior. I'm going to go ahead and post a link to the meeting notes in the chat. Feel free to add stuff to the agenda; it is an open agenda, so there you go. You can go in there and follow along with me.

C: Sorry, everyone, my furnace is being repaired.
A: Cool, so yeah. First up, we've got some workshops and webinars coming up this month, which is super exciting; George and I will be hosting or facilitating those. So on the 8th we've got our Cloud Custodian... is that the 101? I think I need to double-check, but between the 8th and 9th we've got a 101 and a workshop.

A: The 101 is going to cover essentially the anatomy of a Cloud Custodian policy, as well as your basic Cloud Custodian commands, and then demo how you would run that. The workshop is an actual hands-on webinar, so you will be following along at home. We've done both of those before, so you might have attended them in the past, but the 102 is a new one; that's on the 16th. In that one we're going to be showing you how to use c7n-org and c7n-mailer: c7n-org enables you to run Cloud Custodian across multiple accounts, and the mailer is what you use to make the notify action actually work. So if you've done the 101, if you've done the workshop, if you're at the point where you're like, "Cool, Cloud Custodian is awesome, I want to do more with it,"

A: this is the webinar for you, so check out those links. The webinars are free; you don't need to pay anything, but it is helpful if you register. So yeah, check that out. Let's see, so, any questions about that?

A: Cool; making sure I'm keeping track of anybody raising their hand. Okay, cool. So yeah, this was a big conversation recently, but FYI, Python 3.6 is deprecated as of January.
B: I was just going to say, Python 3.6 upstream from python.org has been end-of-life for even longer than that, since last year, and if you're on an older enterprise distro, one of the RHEL-alikes, there are lots of them, then using Docker would be our recommendation, because that way we can continue to focus on supported upstreams, and Docker is well supported on all the enterprise systems.

B: Generally speaking, for organizations that are using Docker in some form or another, our images are really just the existing CLI entry point, and we provide some documentation on mapping over credentials, et cetera, such that those run transparently.
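For reference, running the published container image in place of a local install looks roughly like this. This is a minimal sketch: `cloudcustodian/c7n` is the image name on Docker Hub, while the mount paths and the `policy.yml` file name are illustrative assumptions you would adapt to your setup.

```shell
# Run Cloud Custodian from its published Docker image; the image's entry
# point is the custodian CLI. AWS credentials are mounted read-only and the
# current directory is mounted so policies and output live on the host.
docker run --rm -it \
  -v "$(pwd)":/work \
  -v "$HOME/.aws":/root/.aws:ro \
  --workdir /work \
  cloudcustodian/c7n run -s out policy.yml
```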
C: Yeah, I'm adding it. The only thing I want to add for the video, for those of you watching: on some enterprise Linux distros that have the older Python versions you can't move off of, you might also have an older kernel that makes running Docker containers harder than it might otherwise be. That's what a RHEL 7 expert told me, so I don't know if certain vendors offer newer kernels as backports or whatever.

B: RHEL 7 does LVM thin provisioning for Docker instead of effectively the overlay2 driver that everyone else uses. This is probably due to SELinux issues, and it does resolve in RHEL 8, but outside of RHEL, any other enterprise Linux distro is fine running a Docker image; even on 7 it's viable, there are just particular gotchas that don't generally affect Custodian usage, since we're not doing disk volumes in that context per se. So I don't know that it would be an issue there either, outside of handling the standard RHEL semantics around SELinux.
B: ...using SELinux. So this does feel RHEL-specific; SUSE has AppArmor optionally or, I think, by default, and it's generally fine. As we go further afield, I would hesitate to comment without knowing what the specific scenario is, but as an organization we're happy to help you with our software; we're not happy to help you debug your OS.

A: That's completely reasonable. Cool, any questions on that, besides that can of worms? So, moving on to the next item: Cloud Custodian 0.9.15 is out, everyone, hurrah. Check out those links for more info. So this is merged now, so there are now lots of new resources to play with.

B: It is highly alpha; there's a missing part.
B: There are probably going to be whole categories of issues per se; that's the nature of alpha. Like, there are some resources which don't do certain fetches. Actually, as a follow-up to that: there's now a draft PR for the CloudFormation hook option for this provider as an execution mode, which we had comments on last time.

B: It's just a very weird environment, where it's executing on servers in the AWS CloudFormation account, and we have to juggle multiple credentials into customer accounts as well as the provider side, and it's a bespoke packaging format that looks a little bit like Lambda if you squint, but it's very different.

B: Check it out, and if you run into an issue, please feel free to file an issue.

A: Awesome; well, yeah, thanks for the work on that, that's pretty exciting! So, shall we talk release cadence?
C: Yeah, so I added that. We did this release and noticed we hadn't released in a while, and Kapil and I were kind of having discussions like, hey, how come we don't do time-based releases? Or should we, or should we automate them? I was just wondering if anyone had any opinions about that, or if that's something that's important to people to discuss.

B: I mean, definitely interested to hear feedback on what we should strive for. Historically we have tried to do monthly; it's gone as far as six weeks, and over this past holidays it went to actually three months. So now the question is: we're working towards getting more people doing the releases, but there's also the question of, hey, do we just say let's do a time-based, automated, unattended release? It just happens, so to speak. That's also a functional test as well.
F: Really no preference, as long as we can contribute back, because, as people know, we are in the process of migrating off of the old fork, and we have a lot of custom code. As we contribute back, we want to be able to have it merged and then released, and then we can pull down the latest release. So as long as the process is there, it doesn't matter if it's on demand or on a scheduled interval; it doesn't really matter for us.

B: Yeah, I mean, you know, we could go to longer ones, but we have such a constant pace of change, and the cloud providers are changing as well. And yes, I have noticed several PRs in the last 24 hours as well, so much appreciated for the contributions, and we can go through those as well in this meeting.

B: But in that context, I didn't hear an expression of a preference: what would clearly be too long, like, you know, where it's just too far from your contribution to see it back out there? That definitely probably would be too long. Okay, yeah, cool, good feedback, thanks.
F: Yeah. I don't know if I have permission to present, but if you can, maybe go to that PR.

F: A follow-up question for Kapil: we were talking about using the missing filter. I think I understand what you meant now, so I looked through the code and I updated the PR and also provided a sample policy. So if you just take me to the... look at the policy to see if this is what you had in mind; so basically now we have a missing filter.
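As a sketch of what such a policy can look like (the embedded resource choice here is a hypothetical example, not necessarily the one from the PR), the `missing` filter on the account resource takes an embedded policy and matches when the account has no resource satisfying it:

```yaml
policies:
  # Flag any account that has no CloudTrail trail at all; the `missing`
  # filter evaluates the embedded policy and matches on empty results.
  - name: account-missing-cloudtrail
    resource: aws.account
    filters:
      - type: missing
        policy:
          resource: aws.cloudtrail
```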
A: Sweet, cool. And then what's it adding, actually?

B: To the rest of it; I'll just scroll down the rest of the policy. Cool.

A: Okay, cool, we looked at that one. And then, yeah, so we have a policy question. How do we want to do this, George?

D: Sure, yeah. So I mean, this is a pretty basic policy. The ask is to stop an EC2 instance if the mandatory tags are missing, so I've created the policy, but I've used the event-based mode, and I gave the event as RunInstances, but the instance isn't stopping.

D: So as a workaround, I created a periodic policy. I marked a...
B: When you said that you tried to do things while it was pending and couldn't: are you using CloudTrail and then trying to wait for after it was running? That is, were you trying to use the CloudTrail event or the EC2 instance-state event flow on EC2?

D: So I'm using the CloudTrail event for RunInstances; I'm checking at RunInstances, but I gave the instance running state as a filter.

D: So, as a workaround, I used the periodic mode. In the event-based policy I'm marking that instance for op, marking it for op to stop it after one day, and in the periodic one I'm checking for that marked status tag, and if it finds the marked status tag, then it should stop.
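The two-policy workaround described above can be sketched roughly as follows; the `Owner` tag key, the hourly schedule, and the one-day window are illustrative assumptions, and the Lambda execution-role settings that these modes need are omitted:

```yaml
policies:
  # Policy 1: on RunInstances, mark instances lacking the mandatory tag
  # so they get stopped one day later.
  - name: mark-untagged-ec2
    resource: aws.ec2
    mode:
      type: cloudtrail
      events:
        - RunInstances
    filters:
      - "tag:Owner": absent
    actions:
      - type: mark-for-op
        op: stop
        days: 1
  # Policy 2: periodically stop anything whose mark has come due.
  - name: stop-marked-ec2
    resource: aws.ec2
    mode:
      type: periodic
      schedule: "rate(1 hour)"
    filters:
      - type: marked-for-op
        op: stop
    actions:
      - stop
```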
D: But I'm getting a TypeError: can't compare offset-naive and offset-aware datetimes.

B: So that would be useful to file as an issue. In terms of going back to your original mechanism of trying to do enforcement in real time: you don't necessarily want to go on the CloudTrail event per se. What you really want to do is... there are multiple event streams as part of this.

B: If you look at the docs for AWS, there's also a separate set of events just for EC2, like EC2 instance pending and EC2 instance running, which are effectively real time on the underlying instance state; at which point, when you get the notification that the instance is running, you can take any other actions on it.

B: The EC2 instance workflow itself, and the API for creating instances: as you noted, when you're in a pending state that workflow doesn't allow for certain other transitions, but you can wait until it's in the exact state that you want to take the further action on by using those EC2 instance-state events. Now, keep in mind:

B: in that context, you will potentially be affecting instances that are going from a stopped state to start, like someone stops an instance and then tries to turn it back on. So if there are additional checks that you need to do in your policy to make sure that you're only executing on your target set, those are useful; those are necessary as well.
D: Okay, so just to be sure: you're suggesting not to use the CloudTrail mode, and to use the basic AWS APIs for the instance state?

B: No, I'm referencing that Custodian has support for multiple execution modes, all of which are event-based. In the context of this, the specific one that I'm referencing, I'm just linking it here, is a separate execution mode called ec2-instance-state, which you can use to execute whenever an EC2 instance has reached a particular state. You can focus on pending, stopped, you know, running in this context, and therefore you're...

B: One option is to use a different event that actually reflects a place where you could take those actions. There's a second option, which I don't recommend, but we have it because it's useful for this purpose: there's effectively an ability to sleep, and therefore try to let the control plane get to a steady state before the Lambda finishes policy execution.

B: In this one, I would actually look at the ec2-instance-state mode, because that actually targets the instance being in the running state, but with the caveat that you'll be looking at other instances beyond those that are created by RunInstances; you'd also be looking at things that are turned on by StartInstances.
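A minimal sketch of that mode, with the caveat just mentioned handled by re-checking the tag filter on every firing (the `Owner` tag key is an assumption, and role settings are omitted):

```yaml
policies:
  # Fires whenever an instance enters the running state, whether from
  # RunInstances or StartInstances, so the filter re-checks compliance.
  - name: stop-running-ec2-missing-tags
    resource: aws.ec2
    mode:
      type: ec2-instance-state
      events:
        - running
    filters:
      - "tag:Owner": absent
    actions:
      - stop
```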
D: Okay, yeah, this might be helpful; thanks, I'll look into this ec2-instance-state.

B: Right; but sorry, please continue with the next question.

D: Okay, so the next policy: the ask is to check whether the EC2 instance resides within the subnet.

B: There's a filter for this; it was like network location, I forget. It was for this exact type of use case, where you want to intersect security groups and subnets and application resources and check that they all match the same way. I can't remember the name of the filter off the top of my head; network-location is the name of the filter for this purpose.
B: You can do one-way hops; you can compare two resources. It's when you get to like two or three resources... so you can do a security group or a subnet filter on an EC2, or the direct attribute match on the EC2, that's fair, and that allows you to go one-way hops. In some cases where there's a specialized need, like here, we implemented a separate filter that's dedicated to this purpose, for this particular use case.
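A sketch of the dedicated filter being described; the `Env` tag key and the match direction are illustrative assumptions:

```yaml
policies:
  # Find instances whose subnet and security groups disagree on a tag,
  # using the network-location filter's cross-resource comparison.
  - name: ec2-network-location-mismatch
    resource: aws.ec2
    filters:
      - type: network-location
        key: "tag:Env"
        match: not-equal
```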
B: This was something that came up from an interested contributor, and they implemented the functionality to do that tag intersection across multiple network-attached resources, from the resource to the network, as well as security groups. Okay, but as a generic question you're accurate, and it just happens that in this particular case that you're looking at, there were some things added specifically for it.

D: Perfect, okay. So if time permits, I have one small question. This is also related to tagging.

D: Shall I? Yeah, good. Okay, so when we filter out non-compliant resources, like a non-compliant EC2 instance, I'm using the action to send an email to the resource owner. So again, after referring to the documentation, I understood that there should be one tag named... I'm sending to resource-owner, and there should be a default tag of...
B: You can customize what the owner tag is inside the mailer, in your mailer config. We expect there to be a notion of a tag that is mapped to an owner, but what that actually maps to is a configuration item that you can configure against your mailer config. I do think it assumes a default; I forget, I don't know if it has a hyphen, or whether it's just Owner, I think, might be it.
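A sketch of the relevant mailer config keys. The key names follow c7n-mailer's config schema as I understand it, but the queue URL, role, addresses, and tag names here are placeholders you should verify against your own deployment and mailer version:

```yaml
# mailer.yml (fragment): which tags resolve `to: resource-owner`, and a
# domain to append when a tag value is an id rather than a full address.
queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/custodian-mailer
role: arn:aws:iam::123456789012:role/custodian-mailer
from_address: custodian@example.com
contact_tags:
  - OwnerContact
  - Owner
org_domain: example.com
```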
D: So I saw how to customize it, how to customize the config in the mailer; I did that. But I just wanted to understand: the documentation said that if we haven't customized it, by default there should be a contact owner tag, and if the contact owner tag exists, and in the action we send the email to resource-owner, then it will use the contact owner tag.

D: I went inside the mailer config and I saw everything was fine; it was suffixing the email domain name as well, but apparently it's not sending the email. So I was wondering if there is a way where I can just create a tag and then just append it with a string, @ my company's domain name, dot com; then I could use that tag to send the email, but I couldn't find any example of that.

D: I actually have a tag with the owner. While creating a new instance, I have created that auto-tagging thing, so I already have... there's something known as a core ID in my company, and the email goes like this: core ID @ my company's name dot com. So what happens is I'm getting the core ID, so I wanted to find a way to append that tag with @ my company's domain.
B: So for that case, I think the question is: have we looked at the mailer logs? Maybe that's a better place to check.

B: This might be a better topic for Gitter, because this sounds like more interactive debugging per se. In that context, if something's not working with the mailer, the right thing to do is look at the logs for the mailer, and let's review that in that sort of channel; because it could be a network issue, there's a myriad of issues for actual delivery, or it could be trying to use SES and not having SES configured correctly.

D: Sure, I'll post my question over there on Gitter. We have configured everything, so if I explicitly specify an email in the to section, it goes out; only this resource-owner thing is not working. So that was a concern for me, but yeah, thank you for your response; I'll just post it on GitHub and look out for a response.

A: And then on Gitter there's a lot of brains there, and someone should be able to help you out. Yeah, thanks for asking those questions; I'm sure you also helped some other people out here too. Awesome, cool!
B: I had a brief look a few days ago; it looks good, thank you. I've been super wanting to, I think,

B: branch on that myself, but I'm happy to take this one. Sounds great. I had a brief look like a few days ago; it's on my backlog, but yeah.

E: Yep, I think that's it for me. There's one more PR that I was looking into; I think there's a PR from Lucas, and there's some feedback that needs to be addressed, but I think he's missing; I haven't seen him for a while. So I was thinking if I could do a separate PR for that. It's related to Lambda@Edge on Lambda resources.

B: Yes, three nines.

B: Okay, I just meant, still at the org: assigning the CNCF CLA stuff is going to want all the people that have commits in it to do that, and this PR predates when we had that in effect. Got it, okay.
B: The TL;DR is that how AWS accounts work is different from how GCP projects work and how Azure subscriptions work. GCP projects and Azure subscriptions were actually doing self-discovery based on the credentials of the executing user, using the iteration APIs for projects and subscriptions. An AWS account is a self-reflective resource: basically, whatever account the credentials are coming from, we always have a singleton there that represents that account. And so, in the context of using a c7n-org accounts file, when you go to execute on account, it's a different type of resource than in the other providers, and that is potentially reflective of doing an additional resource in both GCP and Azure.

B: The other consideration is doing it against the discovery-based projects and having appropriate filters there. There is a PR around doing config-based discovery with some additional, more fine-grained hierarchy than what is offered now, which lets you get a full subtree. The issue with the discovery API in GCP, given the filters that are available, as you already noted, is that it's effectively single-level: it's a single node deep from whatever parent you pick. It doesn't actually recurse the subtree, which is problematic for lots of use cases.

B: So I think the two options there are going to be: hey, let's go ahead and merge some of the Cloud Asset Inventory support for just a few products that's been hanging out for a little bit, which I think is probably the right thing since it's already got work in flight and it's got other fixes for resource execution across different providers, so it would be nice to get that in; and the alternative, or other consideration, would be having a self-reflective type of resource.

B: That's an analog to an AWS account, in both GCP and Azure, where effectively what you put in the config file will be exactly what you would be executing against in that context.
B: The difference is... that should do it, yeah. I was just reflecting on the nature of credentials in the different clouds; I mean, GCP and Azure are effectively multi-account, let's say, from an AWS perspective, whereas an AWS credential...

B: But I don't think it would actually matter for the self-reflective nature; in this context we'll just set the appropriate subscription ID or project ID, and the rest of the APIs would work against that one.

H: Yeah, yeah, because I think all the calls take the project ID; I mean, all the calls I've worked with, you can scope down to the project.

H: So the one I'm struggling with, Kapil, is, you know, trying to make this work with a query filter. I'm not sure if it's something that used to work and stopped working, but that filter is not working: like, for instance, it goes off, pulls the full list of projects, and then only client-side starts doing the filtering.

H: Yeah, right, right at the bottom; I think the list is showing, right? Yeah, right at the bottom, so...

H: But what happens is it still calls, you know, first, all the projects out; then, on the client side, it's doing the filtering, which is not what I expected.

H: Yeah, and I can mess it up, like just put some bogus value in there, and from the dump, the API call, you know, seems to be correctly structured, you know, the parameters; but the moment I put a real ID in there... there are only eight projects in there, but it returns, first, you know... I kill it because, needless to say, thousands of projects. So this one is super hurtful at the moment.
B: Okay, I mean, I can take a look at this. I do want to caveat that when I dug deep into the project filtering mechanism, it was shallow, insofar as it only returned back immediate children

B: that are directly in that parent; otherwise, if they were in subfolders, yeah, they wouldn't get returned. That's why there's work that's been done on Cloud Asset Inventory as a folder source, which supports actual hierarchy and nesting and the full subtree; but yeah, the server-side API that GCP exposes did not have that. If you're having issues with that, you know, that's relatively straightforward to look at, but yeah.
H: Yeah, if the server-side filtering is only going to give you direct ancestors, sorry, direct descendants, of that folder, it's going to have very limited use. Okay, so you're saying the Cloud Asset Inventory would give us...

H: Okay, yeah, the challenge then is that the IAM credentials are in this API, in the Resource Manager, and so the only way to get them out is through this. I mean, yeah, I'm not sure if anybody else is trying to get IAM creds out, but this is a real challenge.

H: Yeah, like the primitives: I'm going to pull out all the owner and editor members of a project, and of course those are very much of interest from a security perspective, and to get that, yeah, you know, currently I'm getting all those projects back.
B: Yeah, the CAI stuff should work. And then the alternative is we add a new resource, which is actually relatively easy for us to do; it might even be less work than trying to use the PR to add some notion of a self-reflective project in the context of c7n-gcp. I think the question to verify would be whether we set the project ID as we go through to execute, which I think we do, so yeah.

B: Okay.

H: That'd be awesome. And along similar lines, do you know why parameters are not getting passed through when you do run-script? Is that expected behavior?
B: So if you can put the whole command in quotes, that would also resolve it. It's just the shell parsing, a question of which parser tries to interpret which command line; so to be explicit, if you can just drop the whole thing you want to execute in quotes, then it'll just get passed straight through. Okay?
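In other words, something along these lines; the accounts file, output directory, and the gcloud command itself are placeholders:

```shell
# Quote the entire command so c7n-org passes it through intact instead of
# letting the shell split the arguments before run-script sees them.
c7n-org run-script -c accounts.yml -s output \
  "gcloud compute instances list --format json"
```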
H: Let me give that a shot. I haven't, because I can see the vars are right, you know, in terms of dumping the shell environment variables; but to get them to be passed on the command line to gcloud, because it mandatorily requires it, it won't read the env var anyway. You would see my tests in the GitHub chat. And then the one that's really hurting us is this: missing tags from AWS; it's in c7n-org, so the vars and tags are missing from the resources files.

H: Yeah, they're missing from the resources.json.

B: The resource tags, or... no, the account tags. So the account tags don't go in the resources.json; they're defined in the accounts file. When you go to do the c7n-org report, you pass both.

H: Okay, so let me ask the question differently. If I want all the vars or tags, doesn't matter which, from the accounts.yml to be available in the resources, so I can do reporting, what would be the best way to do that?
H: So when I do the c7n-org report command, I want to just reference those tags or vars in the report as additional columns. Usually you can only reference things in the resources, you know, the JSON file.

B: Okay, that's fair, I think, and a useful capability, and worth a GitHub issue: when you're doing a c7n-org report, being able to pull fields from the account yaml configuration. I think that sounds totally good. Yeah.
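For context, today's reporting flow looks roughly like this sketch; the file names and the tag expression are placeholders, and the `--field` flag (which, as I understand it, draws extra columns from each resource's own data, not from the accounts file) is why account-level tags would need the enhancement being discussed:

```shell
# Run policies across accounts, then build a CSV report with an extra
# column taken from each matched resource's own tags.
c7n-org run -c accounts.yml -s output -u policy.yml
c7n-org report -c accounts.yml -s output -u policy.yml \
  --field Owner=tag:Owner
```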
H: Okay, so then, yeah, because one option is providing that config file in the run, in the report, right?

A: Awesome, love seeing that happen. Yeah, we have been talking a lot this week; excellent meeting. So, taking a look at these open PRs here, does anything jump out at anyone?

H: What are, you know, some of the best practices: how to set up the environment, how to debug it, how to add to it, instead of having to, like, start over from scratch? That, I think, would be helpful, because there are definitely things I can definitely contribute, but of course, instead of spending hours figuring out the best way to do this, these would be really great to have; maybe AJ can lead it.
A: Yeah, you are actually the second person to ask this, and I have been thinking about this and how we can do it. We also want to address the contributor docs that we have right now, and, in the process of doing that, figure out how else we can support contributors and people who want to contribute. So stay tuned,

A: because we are thinking about that. We just have to kind of think about what that content would look like and how we could best serve someone, but stay tuned; it is on our radar.

H: Yeah, no, that'd be really helpful, so more of us can help fix those issues.

B: Also, just reiterating that we are planning on doing a contributor sprint, I think at PyCon this year, which is... oh, it's PyCon, okay. I think it's in April, sometime in April; it's in Salt Lake City, I think at the end of April.

H: Oh, that's awesome. Great, great meeting this week.