From YouTube: Cloud Custodian Community Meeting 20220118
Description
Our community meeting is public and we encourage users and contributors of Cloud Custodian to attend! You can find the notes for this meeting on our github repo: https://github.com/cloud-custodian/community/discussions
To get an invite to the meeting join the google group and you'll receive one via email: https://groups.google.com/g/cloud-custodian
A
We've started. Please be aware that we record all of our calls and put them on a YouTube playlist so that we have them. This is a public meeting and we're abiding by the CNCF code of conduct, so please be excellent to each other. I've tossed the notes URL in chat; the date is January 18, 2022, and it's our second community meeting of the year. So let's start. I'm going to share my screen with the notes. How's everyone feeling today?
A
Everyone's really motivated, I can tell. All right, here we go. Welcome back. A quick CNCF status update I'd like to go over: if you're not aware, we are in the CNCF sandbox and applying to move into the incubation phase of the project. The TOC meeting to discuss that is February 8th, which is when we'll be presenting the application, and a lot of other projects will be there as well.
A
So if you're into that kind of thing, holler at me and I can give you all the information if you're interested in sitting in on those meetings, which is pretty cool. The security assessment for the project is still going; I've added a link to that pull request there. The last time I checked, about an hour ago, Kapil, I think the ball's in our court to answer.
A
If you want to check that out, that'd be great; in the meantime, we'll move on. Workshops are back. Umer wasn't able to make it today, but we have two workshops back to back here in January. January 25th will be the Introduction to Cloud Custodian; we recommend this one for anyone who is just getting started with the project. Then Cloud Custodian 101 is the day after, on the 26th, and that's the workshop where you get hands-on experience with the tool; we have examples for you to use. Liz, AJ, is this you, or is this Jamison and AJ? Who's who?
C
I believe that one is Jameson and me.
A
All right. Goals for the week: getting back from the holidays, that's still happening. We're going to go ahead and move on to the PR and issue reviews. Normally I have an automated script that runs, and it broke over break, so I wasn't able to figure that out in time. I've asked around and cherry-picked some things for us to discuss. Before we do that, though: I do see some people on the call that I haven't seen before.
A
If you want to say hello, welcome, first of all. And does anybody have any other agenda items that they don't see listed here that they would like us to discuss?
B
That's a new provider, yeah. I think it had been flagged previously, looking at the Cloud Control API, and I think we had seen some stuff around re:Invent about it, from both HashiCorp, in the form of a brand new Terraform provider, as well as Pulumi, in their case actually having their own provider instead of using the Terraform providers. So what is it?
A
The PR? I've got the wrong one.
B
There's some repository text which sort of goes through the nature of this new provider: what's the value, what's the utility, why are we doing it, why is it a new provider? The nutshell is basically understanding what Cloud Control actually is. The latest version of the CloudFormation engine has a notion of a type registry associated with it. That type registry is extensible, and some of the AWS native resources are available through it as well.
B
Sorry, they're open source as well as available through it. But basically that type registry exposes what I would call a CRUD-L (create, read, update, delete, and list) API on the different resources, and Cloud Control effectively says: hey, why don't we take that type registry API that these resource implementations are using and make it available through a separate API, independent of CloudFormation? So this provider effectively exposes those directly, as a new provider.
B
The reason for doing a new provider is that the resources have very different attributes than what we would commonly see if we did a describe on that resource through the native service APIs. In many cases it's just casing, but in many cases, you know, there's a long legacy of CloudFormation versus the original service team's battle of specification, and the attributes that we may potentially care about aren't necessarily exposed.
B
So if you take a log group as an example of a resource that is available through both this Cloud Control provider and the AWS native provider in Custodian, you would see that storedBytes, for example, is not available as an attribute in Cloud Control. In doing that, I'd also highlight that the CRUD-L implementation is non-uniform: not all resources support list, for example, and some resources purport to support update.
B
They don't really do it in a usable way; Lambda functions are an example. A couple of other caveats worth highlighting: some commonly used resources, say EC2, SNS, and SQS, just to name a handful off the top of my head, are not available through this provider, just because of limitations in Cloud Control where it can't do an actual list of those resources. But for all the resources that are available through this provider, the following applies.
B
There is a generic update and delete action, and in general we support tags through this universally as well. So every resource would have an update action, a delete action, and tag filtering, and potentially tag modification as well. In many ways, what Cloud Control does is a trade-off: between using a service-specific API and dealing with its quirks and idiosyncrasies, versus using a generic but non-uniform API and dealing with how that actual API behaves.
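To make the generic actions concrete, a policy against this provider might look roughly like the sketch below. The awscc resource name and the tag filter are illustrative assumptions based on the discussion, not confirmed syntax from the PR.

```yaml
policies:
  # Hypothetical sketch: target a resource through the Cloud Control
  # provider, filter on tags, and use the generic delete action.
  - name: cleanup-untagged-log-groups
    resource: awscc.logs-loggroup    # assumed resource naming
    filters:
      - "tag:owner": absent          # universal tag filtering
    actions:
      - delete                       # generic delete exposed for all types
```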
B
So now we're dealing with the idiosyncrasies of the CloudFormation type implementation that implements that API, and in many cases those are not open source; it's probably like 40% open source, or it might be even smaller than that. I'd say slow maturity, but there are maybe 800 AWS types exposed through the CloudFormation type registry, 400 of which are fully mutable, and of those a smaller subset support the full API set that we need. Net net, though, it's still a couple hundred resources, probably still like 200 resources.
B
I just want to create awareness that these are mostly long-tail resources. So, Nimble Studio: did you even know there was a service called Nimble Studio? Well, there is, and this supports it, and some additional things along those lines. So this is really helpful for filling out the full coverage on AWS, to be inclusive of all the services that are out there. It's only been around for a few days, this particular provider, but it does seem to work.
B
I've got a few sample policies, and I'm writing out some docs, trying to get it into our doc-building stuff. One additional capability here is that the schema for most of the resources is available as JSON Schema, so we'll see if we can expose that directly in the documentation, as well as the attributes of the resource. Potentially we can do some additional work around validation of things like value filters, to make sure that they're actually targeting a real attribute on the resource.
B
That's just for improving the authoring experience. There are still some caveats here, specifically around child resources, that we still need to work through, both in terms of matching up to event-based execution modes and just being able to natively expose child resources, i.e. those that only exist if you pass a parent id alongside them. So we're just trying to make sure that this is actually useful out of the box. I'm currently considering this provider alpha.
B
Unlike the other providers, where we have to reimplement all outputs and log sinks and blob stores, this will just piggyback on the existing AWS provider for those outputs. So everything else is roughly the same anyway.
B
That is a good question: is there a value-add? Not exactly. If we have an existing resource in the native provider and it does something useful for you, then that is probably your best bet.
B
This provider will do additional API calls for the resources, based on what the underlying internal implementation of the provider is, and, as I noted, in several cases we don't actually have full awareness of what that looks like. But typically there will be additional API calls involved in the implementation of the provider, based on the type registry implementation.
B
So I would say it's mostly for supplemental resources versus existing resources. For example, MemoryDB is supported through this, but it's not supported natively in the Cloud Custodian AWS provider. So if you wanted to bring MemoryDB into governance policies, it would be worth looking at this provider. But...
B
You know, if you have a Lambda policy, for example, you would be better off using the existing Lambda resource implementation. But there's the commonality of Custodian's schema validation.
B
Caching and outputs will be the same between both providers; it's really just a resource differential. But as an example, even for reads and gets and lists, in many cases the CFN type registry implementation will do additional API calls, even if you as a policy author weren't interested in those attributes, and that can result in additional load on the account. We also generate these the same way that we do for AWS and GCP.
B
There's also metadata on all of these resources, as well as their actions, annotating what permissions they need, which is a direct correlation to what API calls they're doing.
B
By the way, it might be worth taking two seconds to talk about the implementation of the provider itself. If you look at that diff count, it says 92,000-plus lines; most of that is JSON. Actually, something like 400 of the files there are JSON, and there are maybe six Python files. Those are directly pulled from CloudFormation type information, and we'll just update them out of band, as a background nightly thing, so they're mostly immaterial to the code change itself that's engaged.
B
Nothing per se; the impact is mostly that CloudFormation changes are supplemental. The underlying JSON schema here is tied to the CloudFormation schema definition, so they've got a bunch of compatibility constraints associated with them, and typically changes are supplemental and compatible, via addition of attributes. So it's not per se problematic; it would effectively be that you wouldn't be able to address the new attribute until there was a new release of the provider with the updated files.
B
On metrics: I've previously heard from some larger users that are collecting metrics on every policy that the sum total of all those metrics can actually become expensive. So one thing we're looking at is adding statsd support as a metrics sink, which will be available generically across the different providers.
B
So if you have a local statsd aggregation point, then we'll flag it and try to support the additional metadata. Statsd is not a standard per se; it's a de facto standard, and every implementation has chosen to do a few things a little bit differently.
B
But it's still simple enough for us to implement, without entailing additional dependencies, that it will be a relatively straightforward lift. Just by way of an alternative that was considered, I looked at OpenTelemetry as I've gone through this.
B
OpenTelemetry is a CNCF standard and implementation that has seen some uptake in the industry, but it involves effectively a set of SDKs and an indirection that typically ends up requiring an additional routing broker agent, and it's also evolved into a set of competing distributions of the same. All of that, at a minimum, would entail multiple megabytes of additional dependencies, versus just doing a single module for statsd support. OpenTelemetry also entails integration of traces as well as metrics, and I think they're starting to look at logs.
B
We already have native log integration to the different provider back-end stores, as well as metrics, et cetera. So for something that's not starting from scratch, it's not clear that it would be the right lift for us. Given we already have the native provider integrations, we're probably not going to go after that.
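As a rough sketch of what a statsd metrics sink involves, the snippet below formats and sends datagrams in the common wire format. The metric names, host, and port are illustrative assumptions, not Custodian's actual implementation.

```python
import socket

# Minimal statsd sink sketch, assuming the de facto wire format
# ("name:value|type"); as noted in the discussion, real statsd
# servers and clients each vary a little.
def format_metric(name: str, value: float, metric_type: str = "c") -> str:
    """Render one statsd datagram, e.g. 'custodian.resources:5|g'."""
    return f"{name}:{value}|{metric_type}"

def send_metric(name, value, metric_type="c", host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send; dropped packets are tolerated by design."""
    payload = format_metric(name, value, metric_type).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
```

Because the transport is UDP, emitting metrics never blocks or fails the policy run, which is part of what makes this a light dependency-free lift.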
B
No, I had just happened upon some statsd client implementations that were relatively small. So it's on me to go file an issue and try to tag all the related issues to it.
A
Okay, and if you're listening to this, hopefully the notes will have that issue in them by the time you get to it. All right, anything else, Kapil? That's it: starting off 2022 with a nice fat list of stuff to do, 19,000 lines. AJ, you're up next; you've got two cool things that we ran into over the weekend.
C
Yeah, so where this started: I don't know who else is doing some local contributions, running tests locally. I've been using tox, and I know at some point Kapil had switched over to using Poetry to both install the dependencies and run the tests, with make install poetry and make test poetry.
C
So I started migrating some of my local tests over to using that, and because of some of my AWS CLI and git config options, I had some tests break. So I made what was theoretically a fix, and the fix worked for the AWS CLI bits; that was just unsetting an environment variable. But on the git side it was dying because of a non-default initial branch. This is with a lot of git support moving over to using main rather than master.
C
So the policystream tests were all implicitly using master as the default branch. I tried to move that stuff over, and it all worked locally and it worked in CI, but some of the older Docker images had an older version of git that didn't support the initial-branch option to the git init command.
C
So it just kind of died. It looks like Kapil just bumped the image version; we're using a newer Ubuntu image that has a newer version of git, so we don't run into that issue and things just work.
C
I don't know if other folks have started using Poetry when they're doing any sort of local installation and testing, or if there have been any other issues that have come up related to that. Aside from these couple of things, it's been pretty smooth and happy on my side, so it's just worth calling out in general that the move to Poetry has been pretty nice.
B
Yeah, so this is effectively support for non-master branches, or a default main branch, at least with regards to the Cloud Custodian repo itself. I have tried this conversion process out in a couple of other repos; it's mostly painless, except for people with existing checkouts, in which case, when they go to the project page, there will be an instruction that they can copy and paste.
C
I guess a related thing that I didn't raise for policystream, but I'm wondering about: it looks like policystream, if you don't provide a branch name... so, quick context on policystream: it's one of the Custodian tools that you can use to see changes in policies across different git commits in your policy repo, and running a policystream diff command will show changes from one commit to another. If you don't give it a target revision, it will just assume master, and I don't know if it's worth adding any intelligence to that.
B
That's a good question. At the very least, it's worth documenting what that should look like. There is a way to interrogate it, I believe, but I'm not entirely clear if we want to do the extra work. I'll defer to Tim, who's got his hand raised. Tim?
F
In my use of it, at least, I thought that if you didn't specify a branch, it just used the default branch that was there. I'm not sure; I don't recall it explicitly declaring master.
F
I think maybe it's the way in which we're using policystream, differently from the actual CLI; we're sort of hooking in at a different level. So maybe.
B
Yes, policystream has a bunch of different stuff in it. It does want you to specify a source, and if you don't specify a source, I think it will try to do some inference. Actually, yeah, it does: it will point to master, or HEAD up one rev. So there are one or two hard-coded references in there, which are overridable via command-line flags.
F
The default implementation, well, the current upstream implementation, at least last time I looked, was entirely time based, which is fine if you have a linear squash history. But if you don't have a linear squash history, you're potentially going to miss commits.
B
For the sorting between repos, I believe we do a topological sort, using one of the pygit2 sorting algorithms. I have to admit that, off the top of my head, I'd have to take a look at it before I could actually speak to it intelligently.
B
Yes, and on the command line, I think we default to reverse time, but you can pass that in explicitly as well.
F
Okay, yeah. I was just thinking that I sort of overrode the implementation to actually do a complete DAG search from the previous revision, because if you say start from this rev id and then move forwards, the topological sort doesn't guarantee to grab all of the history correctly.
F
I guess that just comes from my history playing with revision control systems and knowing that, if you've got your graph of revisions and you say start from this revision and you have a topological sort, it doesn't actually guarantee that an earlier commit that was not merged in until a later time on the branch won't be skipped.
B
Yeah, there are always trade-offs; that's why both the available algorithms are exposed, for that purpose. We default to one just because, you know, reverse time is a very intuitive thing, but the other option is there.
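The sorting trade-off being discussed can be seen on a toy commit graph. This is a self-contained sketch, not policystream's actual code: a commit authored early but merged late can be dropped by a pure time-based walk, while a topological order always lists parents before children.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy commit DAG: each commit maps to its parents, with a timestamp.
# 'feature' was authored at t=1 but only merged to the mainline at t=4.
parents = {
    "root":    [],
    "feature": ["root"],
    "main2":   ["root"],
    "merge":   ["feature", "main2"],
}
times = {"root": 0, "feature": 1, "main2": 2, "merge": 4}

# Pure time order: walking history "since t=2" skips 'feature',
# even though it only reached the mainline via the t=4 merge.
time_order = sorted(parents, key=times.get)
since_t2 = [c for c in time_order if times[c] >= 2]

# Topological order guarantees parents precede children, so walking
# the ancestors of 'merge' always reaches 'feature'.
topo_order = list(TopologicalSorter(parents).static_order())
```

Here `since_t2` comes out as `["main2", "merge"]`, missing the late-merged `feature` commit, which is the failure mode described for non-linear histories.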
F
Yeah, I think a lot of people also tend to squash history a lot; it's much more of a practice in git to basically keep a nice mainline.
B
Yes, for those that are used to history-preserving version control systems, this is maybe a little bit strange, but it is a very common behavior, I think.
A
How bizarre. Speaking of Tim, you're up next with your three ideas.
F
Yeah, so I think we had it. So, do you want to...
F
I'll just go from the top. About a year ago I was talking with Kapil about how I could contribute to Custodian, and one of the things we were looking at, prior to making a Custodian 1.0, was that we needed a way to deprecate some older ways of doing things, because Custodian has grown over time.
F
As things were introduced or complexity changed, there was a very strong desire to keep backward compatibility for everything; we didn't want to break any of the existing users. So even though new ways were introduced, the old ways were still allowed. A way to mark things as deprecated got merged in a number of versions back, but we've not aggressively gone through the code base actually marking things as deprecated, with the intent of having a potential break moving forwards.
F
I'm not sure what the desire is around a 1.0, or whether or not we're going to say we're actually going to break some backward compatibility and remove some of the old deprecated things. But at least we can raise awareness through the c7n validate command that you are using an old way of doing something and can move to a new way.
F
One of the things that I found as I was going through the code base was terminology like whitelists and blacklists in a number of different filters and actions, and so this PR is just saying we should deprecate that terminology and use allow and deny; it tends to be clearer about the intent of what you're referring to. So that's what this PR, or this issue, is.
B
I think we had also chatted about potentially having some sort of tool that could do some of the simpler transformations for people, as an upgrade type of step. That's the other one.
F
That's another one, yeah. I created a few issues from the discussion point. One was about deprecating whitelist and blacklist; another one of the issues that I raised was about creating a simple tool that we can have in there that can do some of the automated conversion for you.
F
Yeah, so this one was based on the idea of having some deprecation warnings: if we know that there's a subset of things that we want people to easily be able to move from one way to another, we could have a simple tool that would just automatically update the policies, saying here's the old way, here's the new way, and we could then automate that for some people.
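A minimal sketch of what such an automated conversion could look like, assuming the simplest case of renaming deprecated keys in a loaded policy structure. The key mapping here is illustrative, not the project's actual deprecation table, and a real tool would have to handle each deprecated form explicitly.

```python
# Illustrative deprecated-key mapping; not the project's real table.
DEPRECATED_KEYS = {
    "whitelist": "allow",
    "blacklist": "deny",
}

def upgrade(node):
    """Recursively rewrite deprecated keys in a policy dict/list tree."""
    if isinstance(node, dict):
        return {DEPRECATED_KEYS.get(k, k): upgrade(v) for k, v in node.items()}
    if isinstance(node, list):
        return [upgrade(item) for item in node]
    return node

old = {"filters": [{"type": "value", "whitelist": ["a", "b"]}]}
new = upgrade(old)
# new == {"filters": [{"type": "value", "allow": ["a", "b"]}]}
```

Working on the parsed structure rather than raw text keeps the rewrite safe for nested filters and actions, at the cost of losing comments and formatting, which a production tool would want to preserve.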
F
So what I'm trying to do is see if I can get some dedicated work time to actually develop on upstream Custodian.
B
There's a long-term desire I have of extracting c7n core out of c7n, which is currently c7n plus AWS, really just so that the other providers, like if I'm on GCP, don't have to install boto and SDKs they don't care about. That's part of what creates some of that awkwardness, yeah.
B
I hear you, but the reality is: what's going to be the least amount of disruption, what will actually work, what actually doesn't hurt people? Unfortunately, changing c7n to be non-functional by itself would effectively damage a lot of installations.
F
Yes. I guess the other thing I'd just like to say, for people that are here listening: I'm not sure how many people actually run c7n validate over their policies, but with the introduction of the deprecation warnings, when you run the validator now it will actually let you know if you're using any of the things that are marked as deprecated.
F
For the deprecation warnings, we talked about having an optional end date, with the intent of saying this field is deprecated and we intend to remove it on this date. None of the ones that we've added deprecation warnings to have actually put a date on them saying this is the date we're actually going to remove it.
F
But with the deprecation work there was an extra flag added to the validator, where you could say strict, and what it does is make the validation fail if you're using deprecated things, as opposed to just warning you. So if you are using validation as part of the CI of your policies, potentially look at adding the strict flag to the validator, and that would help you make sure you're moving off things that were deprecated.
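The warn-versus-fail behavior described can be modeled like this. It is an illustrative standalone sketch of the validate-time flow, not c7n's actual implementation, and the deprecation entry is a made-up example.

```python
import warnings

# Illustrative deprecation table; a real one would map each deprecated
# field or action to its replacement and (optionally) a removal date.
DEPRECATIONS = {"whitelist": "use 'allow' instead"}

def validate(policy: dict, strict: bool = False) -> list[str]:
    """Collect deprecation messages; warn by default, fail in strict mode."""
    found = [
        f"deprecated field '{field}': {hint}"
        for field, hint in DEPRECATIONS.items()
        if field in policy
    ]
    for msg in found:
        warnings.warn(msg, DeprecationWarning)
    if strict and found:
        raise ValueError("validation failed: " + "; ".join(found))
    return found
```

In non-strict mode the policy still validates and the author just sees warnings; with `strict=True` the same findings become a hard failure, which is the behavior you want in a CI gate.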
B
I think it's in the core; it's been there for a year. Yeah, maybe six months at least, as far as an actual merged release. With regards to deprecation time: I think we also pushed on that point, so that we'd actually define what our deprecation schedule would be. I mean, it's the start of a new year, and I'm definitely open to feedback on that.
B
I think we would need to actually put together an issue, but it'd be something like nine months with regards to actually starting to deprecate things.
B
We'll start to actually publish that as part of the deprecation warning, so you know this will stop working on this date, and then, whatever the release is after that date, we'll effectively strip it at that point.
A
But does it spew stuff to the console that says deprecated or anything?
F
I think what I would like to do, and it might be interesting to see, is write an automated tool initially, because there are easy ones to tackle. With some of the older tagging mechanisms there were mark and unmark as well as tag and untag, and that would be a nice easy one to start with, to see if we could get the conversion tool working just with those.
F
I think that would be an interesting PR to start with for the deprecation tool. I'm not sure what we want to call it; naming things is hard, so I'm open to suggestions.
A
Yeah, and with that we've reached the end of the agenda. Did anyone think of something during the meeting they want to talk about, or any final comments?
B
You've got a couple of PRs up. Is there anything you wanted to flag or raise?
E
No, no. Just an intro: I joined a little bit late, but I'm Darren, I'm from Intuit. We're a big user of Cloud Custodian, and just a heads up, we're going to be contributing a lot of PRs back to Cloud Custodian. In the past we had been using an older version of Cloud Custodian, where you support the plugin model, and we were having our own plugins of things; now we've migrated over to the latest version.
E
So now we'll submit back everything that we have customized. Of the PRs that I've been submitting, thank you, Kapil, for merging one of them. I think one has been approved, so I was just asking what else is needed to merge it, and for the other one I just have some follow-up questions. Nothing else.
A
Sorry, I don't mean to make a long-winded appeal. If you need a more bandwidth-rich experience of someone explaining something to you, you can bring your PR here and we'll just kind of work through it, that kind of stuff. That way you can come for the agenda, and if you're not interested in the PRs, you can just bail on the meeting. We can also schedule other review meetings as well.
A
So
if
you
get
to
the
point
where
you
want
to
hop
on
a
call
or
something
and
do
that
kind
of
high
bandwidth
discussion,
we
definitely
will
do
that.
So,
thank
you
with
that
anything
else.
We're
giving
everyone
18
minutes
back.
It's
a
good
one.
A
The notes will be published to the Google group as usual, and to the channel, and the video will be on the YouTube channel, probably by the end of the day. So with that, thank you very much, everyone, and we'll see everyone in two weeks.