From YouTube: Kubernetes SIG Auth 2019-09-18
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2019-09-18
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: Cool, alright. Tim, did you want to talk about your KEP?
D: Yeah, so I'm proposing three additional restrictions that are enforced through the node restriction plugin, for 1.17. So, just as a reminder: this applies to what kubelets, or anything with a node identity, are able to do in the cluster, and these are placing additional restrictions on editing and creating pods.
D: The three restrictions are to labels, annotations, and owner references, and the reason is that there's a bunch of controllers in the cluster that match pods based on labels and owner references, and being able to adjust those references can be used to manipulate those controllers into doing things that the node shouldn't be able to do, you know, if we're assuming a compromised node in an isolated-node threat model. So two examples are getting a pod to match a service that shouldn't be routing traffic to that node, which can be used for man-in-the-middle, or making a pod match...
D: So that's kind of the high level. The more specific proposal is basically that labels and annotations be restricted to a whitelisted prefix: they have to start with unrestricted.node.kubernetes.io, just to indicate that these are labels and annotations that aren't restricted through node restriction. And the motivation for that is that there are examples of static pods serving services in certain environments.
D: I'm just clarifying, or proposing, that any labels that don't start with kubernetes.io, or some subpath of it, be unrestricted as well, possibly, and the rationale was that Kubernetes shouldn't have opinions about non-Kubernetes labels. Right, so going back to the motivating examples of preventing pods from matching services and other controllers: I think it's pretty standard to have the labels matched by a ReplicaSet controller, for instance for your deployment, be outside the kubernetes.io domain, and so I think that...
D: No, so that's another point. There are two ways that the kubelet has of labeling pods. One is through update: the kubelet doesn't actually have the update-pod permission, but it does have update-status, which can change labels. The kubelet never does that today, and so that's another reason to restrict these; we're saying that this is behavior the kubelet doesn't need, so we're just eliminating it. The other way that the kubelet can add labels is through mirror pods, which are, for those who don't know...
D: A mirror pod is sort of a reflection of a static pod that the kubelet is running through static configuration. So it's a pod that isn't created through the API server; it's created directly through the kubelet, and then the kubelet creates this mirror pod in the API server to indicate that this other pod is running locally. And so you could put any label you want on your mirror pod today, and what this proposal is saying is that now you're only going to be able to create mirror pods with labels using this unrestricted.node.kubernetes.io prefix.
D: Yeah, I think the owner reference bit is a lot more of a slam dunk, that one, especially for controller references. It doesn't make any sense to have static pods that declare owner references to things with controller: true set, because mirror pods aren't spawned by a controller resource, so just flat-out preventing that seems very sensible. And that would prevent the kind of adoption by ReplicaSets, or the confusion of controllers into thinking that these pods belong to them and having them go trigger deletion of other pods. That seems very clear.
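(Editor's note: a minimal sketch of the object shape being discussed, written with the client-go API types; the pod and ReplicaSet names are invented. The point is the controller: true owner reference on a mirror pod, which the proposal would have the NodeRestriction admission plugin reject.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true

	// The kubelet marks mirror pods with the kubernetes.io/config.mirror
	// annotation when it mirrors a static pod into the API server.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "static-web-node1", // invented name
			Namespace: "kube-system",
			Annotations: map[string]string{
				corev1.MirrorPodAnnotationKey: "placeholder-config-hash",
			},
			// A mirror pod claiming a controller owner reference: since no
			// controller spawned it, the proposal would reject this shape
			// outright, to prevent tricking a ReplicaSet into adopting the
			// pod and deleting other pods to compensate.
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "apps/v1",
				Kind:       "ReplicaSet",
				Name:       "victim-replicaset", // invented name
				UID:        "00000000-0000-0000-0000-000000000000",
				Controller: &controller,
			}},
		},
	}
	fmt.Printf("mirror pod %s/%s declares a controller owner reference\n",
		pod.Namespace, pod.Name)
}
```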
G: And maybe this is even another thing: there's been a discussion in terms of fixing the fact that we don't actually use GC for pods; we have all sorts of guaranteed failures around nodes going away and coming back. There is a proposal that I'm trying to shepherd through where it's likely that binding a pod, or in theory the binding of a pod to a node, is going to add an owner reference to that node.
D: From clearest win to most questionable: the owner reference piece seems very clear. The labels thing, there seem to be demonstrated use cases where a node being able to label pods can cause problems. And then the annotations one, it sounds like, is largely speculative: there may be others that may be using annotations for things, and we want to protect them, like what we did for other parts of node restriction.
D: The more speculative and sort of hand-wavy it got, like 'there may be a third-party controller doing the thing that we need to protect', the more we leaned on admission: if someone has their own label that they really need to protect, they can set up a webhook admission plugin to protect it.
D: So actually, when I initially wrote this proposal, I wasn't thinking about the other controllers; I was only looking at the man-in-the-middle vector for services, and so the original proposal was an update to the endpoints controller that said: just never match mirror pods or static pods. But I think the label restrictions are sort of a generalization of that, one that applies to other controllers and, at the same time, solves the services problem.
D: ...we have a node restriction admission plugin. Alright, yeah, I think taking each of those aspects and kind of thinking through what could this break, what is this clearly fixing, and what is this speculatively fixing would be good for prioritizing.
G: And I don't know that I would block it in 1.17 based on that. Just as a, like... it always makes it feel much more warm and fuzzy when we're like, 'hey, we're going to GA, and here's the feedback from the people who are happy with it', versus, in the end, somebody comes around and yells at us. Or, I don't know, doing things that help people.
G: I will note that, like, one of the original things that happened at the same time we started talking about token requests was being able to inject tokens or secret credentials from other providers. With 1.16 and the external CSI local volume support, in theory it's possible to go write local agents that do injection, so, like, I don't know that...
G: ...that's something that we have to do for this KEP, but it might be worth poking some people and seeing if the integration that they wanted to do historically, that they couldn't, is still possible, not using the CSI stuff. It doesn't have to block this, but it is that other part of the 'I want to have a secret injected that's managed by the platform, that I can't corrupt, where I'm not the one running the agent that goes and does it', and the CSI external, or the CSI local volumes, is in theory what was supposed to enable that.
H: Yeah, okay, so that's great. Okay, so, yeah: we had some questions, actually, like, is there an RFC process for doing this, and, you know, what do you guys expect from us in terms of putting together a proposal? Or do you just want to side-channel this stuff?
H: We are actually... Deis Labs is donating it, since one of our engineers paired up with Deis on writing the Vault provider for that CSI driver. Actually, we're going to be adopting that codebase into the HashiCorp org, or at least the Vault provider part of the codebase, and that's specifically what we're talking about here, right.
H: And with the token request feature, I think the primary problem that we have today, with how it's implemented, is that the provider uses its own service account, not necessarily the service account of the pod that it's servicing. Does that make any sense? Yes. So we need it to be, ideally it would be, the pod's service account, and so would this allow us to do that?
B: I think, when I looked somewhat recently, there were some minor things that needed some work. One was that the issuer URL is hard-coded to the legacy token issuer URL, so that needs to be configurable. The other thing was that it would be nice to support an audience configuration, so Vault could use a token that's not replayable against the Kubernetes API server.
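(Editor's note: a sketch of the audience scoping under discussion, using the TokenRequest subresource via recent client-go; the service account name and the "vault" audience string are placeholders. A token minted this way fails the API server's own audience check, which is the non-replayability property mentioned above.)

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Request a token scoped to a non-Kubernetes audience. Leaking this
	// JWT to the external system does not grant Kubernetes API access.
	tr, err := client.CoreV1().ServiceAccounts("default").CreateToken(
		context.TODO(),
		"my-app", // placeholder service account name
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{
				Audiences: []string{"vault"}, // placeholder audience
			},
		},
		metav1.CreateOptions{},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("token expires at:", tr.Status.ExpirationTimestamp)
}
```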
B: As far as the service account of the Vault credential plugin: I think you need that to do the token reviews that you're currently doing today. That could probably be a bit simpler if you used in-cluster config, so you'd run Vault as a pod and then Vault uses the credentials that are automatically injected into the pod. We have some support for that in our client-go, which I don't think you are currently using today, because it's a headache, and I think you'd probably still want what you have today, where you configure a separate kubeconfig, just in case somebody wants to run Vault outside of Kubernetes but still wants to use the Kubernetes auth provider. Okay.
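(Editor's note: a minimal sketch of the hybrid setup described here: prefer client-go's in-cluster config when running as a pod, and fall back to an explicitly configured kubeconfig when running outside the cluster. The kubeconfig path is hypothetical.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig prefers the credentials automatically injected into the pod
// (in-cluster config) and falls back to an explicit kubeconfig for
// deployments running outside Kubernetes.
func buildConfig(kubeconfigPath string) (*rest.Config, error) {
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	return clientcmd.BuildConfigFromFlags("", kubeconfigPath)
}

func main() {
	cfg, err := buildConfig("/etc/vault/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("client ready:", client != nil)
}
```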
B: However, it is not currently ready to replace the old secret volumes, so I think it would be nice to have a little better understanding of how we would replace those secret volumes before taking the volume projections GA. But that's probably something I'm going to start thinking about; at the least, drafting the GA graduation requirements during the 1.17 cycle.
D: I would also recommend prioritizing the client library refresh behavior, just because, if we can do that as soon as possible, it gives that, you know, multiple releases to percolate out to people's projects. I know it's shocking, but most people don't update their client libraries every three months, so the longer we can give that to soak and get out there, the better.
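(Editor's note: to illustrate the refresh behavior in question, a standalone sketch that re-reads the projected token file on a timer instead of caching it once at startup. Recent client-go does this itself when rest.Config.BearerTokenFile is set; the loop below is only an illustration.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

// loadToken re-reads the projected token from disk on every call, rather
// than caching it once at process start, so a rotated short-lived token
// keeps working for the lifetime of the process.
func loadToken() (string, error) {
	b, err := os.ReadFile(tokenPath)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for {
		token, err := loadToken()
		if err != nil {
			fmt.Println("read failed:", err)
		} else {
			fmt.Println("current token length:", len(token))
		}
		time.Sleep(time.Minute) // re-check periodically instead of once
	}
}
```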
B: It's been done for actually a fairly long time, and we are still seeing issues with things like Calico and old versions of Heapster, and even with a project that we have in the kubernetes org: stuff like the cluster proportional autoscaler, which is a kubernetes-incubator project, was broken until fairly recently; they literally just updated.
D: Yeah, I think what we've done in past releases is: have the new behavior, make sure there are e2e tests for it, and then, with at least a couple of releases' notice, flip the default, but then have a way for people to get back to the previous behavior and leave it in that state for a while. And just flipping the defaults for the ecosystem does wonders: you know, people who have a cluster that they're managing can put it back to the old defaults and run using old, tried-and-true methods for multiple releases, but as soon as the default changes, the people who maintain the long tail of components in the ecosystem start getting reports like, 'hey, I spun up a developer cluster and your thing didn't work, and here's a bug report, and maybe you should update your libraries to be within the last few years'. Yeah.
I: Now I remember my question: do we actually write docs that include example error messages that people could find with search engines? Like, I know at one point I was pointing people to a PR by Mike, when he changed a bunch of stuff, and I was like, 'do this'. I don't know if the docs got better in the meantime, or if they still need to be improved, because that probably should also happen. Yeah.
D: It might also be worth adjusting the service account token authenticator to produce a more helpful error message, like 'you are using a service account token that happened to expire, which means you didn't reload it'. Give them hints, instead of just 'not valid after', go away. Yeah, okay, yeah. Well, I think documenting in a few different dimensions there would be helpful. Mm-hmm.
A: So, in relation to, like, you know, expiring tokens and stuff, right: once people kind of get used to infinitely long tokens, you know, you get this permanent behavior of 'I read it once on process start and I never read it again'. What's the criteria for, maybe, like, 'enough of the ecosystem is doing better', right? Like...
G: It's weird, because this one is like a scale problem too, so there may be some audiences that we could pitch this at. So anybody who has lots of namespaces would want this. Now, I'm assuming the token request path is actually more performant than our current secret storage path, which is entirely possible. It's not... Mike, actually, how far has this been pushed so far? Do you know if anybody's pushed this to tens of thousands of namespaces and service accounts?
G: Because, like, I certainly know, from the OpenShift high-density, highly multi-tenant one, this is one that is really exciting to push. So, like, anybody who's in a high-density spot might just want something like this, as long as it doesn't fall apart. The other ones, there might be someone that we could pitch this to, as in: 'hey, if you're running high-scale Kubernetes, you should consider enabling this and be our guinea pigs'.
G: There was definitely a discussion about this that came up a while back, which would be: should we begin looking at gzipping some of these things in the storage layer? And the problem with certs is that they won't compress well, except against each other. So there have been some crazy ideas there, and I guess if it's not really a net win, then maybe the scale guys aren't going to be excited, but there are definitely people who have lots of service accounts. So, Tim.
D: So if we continued refreshing, you know, every 10 minutes, but just minted tokens with longer lifetimes, then we would be able to tell whether these workloads were correctly refreshing or not, by continued use of tokens past 10 minutes, right? And so that wouldn't disrupt anything, and as soon as the pod got deleted you'd get the revocation benefits, and then you could have metrics around, like, 'hey, you've switched to these new tokens'.
D: Maybe; probably, so yeah. So that seems like a second arg to the API server that says: continue refreshing every 10 minutes, but mint for longer, and warn if you see things used past this time period. I don't know. I really want to be able to roll this out, but I want to go to it with data that says that it's safe to roll out. Yeah.
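(Editor's note: a sketch of the detection idea just described. If tokens are minted with longer lifetimes but refreshed every ten minutes, any token still being presented well past its issue time is evidence the workload never reloaded it. This decodes the iat claim without verifying the signature, which a real authenticator would of course do first; the ten-minute threshold follows the discussion.)

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// claims carries the one registered JWT claim this sketch needs.
type claims struct {
	IssuedAt int64 `json:"iat"`
}

// staleAfter mirrors the refresh interval above: continued use past it
// suggests the workload never reloaded its token file.
const staleAfter = 10 * time.Minute

func isStale(jwt string, now time.Time) (bool, error) {
	parts := strings.Split(jwt, ".")
	if len(parts) != 3 {
		return false, fmt.Errorf("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false, err
	}
	var c claims
	if err := json.Unmarshal(payload, &c); err != nil {
		return false, err
	}
	return now.Sub(time.Unix(c.IssuedAt, 0)) > staleAfter, nil
}

func main() {
	// A toy token whose payload claims it was issued 30 minutes ago; the
	// header and signature segments are stand-ins.
	payload, _ := json.Marshal(claims{IssuedAt: time.Now().Add(-30 * time.Minute).Unix()})
	tok := "header." + base64.RawURLEncoding.EncodeToString(payload) + ".sig"
	stale, err := isStale(tok, time.Now())
	fmt.Println(stale, err) // prints: true <nil>
}
```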
G: Talking about me? I don't think... I love gRPC; I am not sure that gRPC is our most successful extension mechanism. And so that was where my concern was: it was just adding more and more of those without, like, a little bit of soul-searching on 'are we succeeding with the goals we have for all the other ones?'. So can you offer an alternative?
G: The KMS one is a little special, because if you compromise that, you've owned the whole cluster, and this one is in that same vein, I will say. The other thing, the thought that I think led to this (I'm starting to remember a little bit), is that doing these kinds of plugins on masters is really, really complicated, because everybody's masters are different, and so having masters call out to extensions has always been somewhat problematic, and so I think there's...
B: Yeah, even beyond the threat of a compromised master, I think there's value in using robust key management systems, for stuff like auditing and rotation calendars that we probably will not implement. So I guess there are two questions: is this broadly useful? And then the second question is: if so, what mechanism do we use to extend the API server?
A: For this one, do we feel the same need that we felt for KMS to have, like, gRPC? Because, if I understood correctly, for the KMS stuff we really wanted a UNIX domain socket, so that way you kind of didn't have to do authentication on it: you basically knew it was the kube-apiserver talking to you on that domain socket, and once it connected, you couldn't disconnect it without causing auditable events in the kernel.
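(Editor's note: for reference, the pattern being described: a gRPC client dialing over a UNIX domain socket, as the KMS plugin interface does, so the socket's filesystem permissions stand in for connection-level authentication. The socket path is hypothetical.)

```go
package main

import (
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial over a UNIX domain socket. No credentials are configured on
	// the channel itself; access control lives in the socket permissions.
	conn, err := grpc.Dial(
		"unix:///var/run/kms-plugin.sock", // hypothetical socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connection state:", conn.GetState())
}
```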
G: It is at least analogous to a callback-style pattern for some of this, because we're not storing... the whole goal of this is not to store the tokens in etcd, so the controller pattern is straight out. I think most of my concerns that we talked about at the time were more about complexity, although, like, whenever someone suggests gRPC I'd say, you know, even on the kubelet, I think the CRI has not been as good of an extension mechanism for the node as it could have been.
G: The novel thing about webhooks is that webhooks are transferring Kubernetes objects in almost every case, or they're a completely opaque content payload. Actually, I don't know that there was... there are, like, readiness probes, or the other example of something that is effectively a zero-data implementation: there's no data payload. I would agree for anything that's not a kube API object, or...
G: I don't know that it's superior to gRPC, and the point about the other properties is fine. I guess: do we think that this would be something that people would host remotely, significantly far away from a cluster? Or would it be far enough away that you are going to feel like it's likely to be outside of the device? So, like, another argument for webhooks is you can very easily do things like serverless implementations of them, and you can't do a serverless gRPC implementation, period. I mean...
G: I guess you can, but, like... so, we had, at least for webhooks, discussed the idea that they make it possible to run any backend that can deal with kube API objects, versus this is a little bit more specific, which is: do we really need, do we want, to make it super easy for people to go write these plugins? And I realize...
G: ...my comment is almost exactly the opposite of that, but, like, I think it's a reasonable point that if we're going to pick something, and it's not a kube object, and we don't intend for people to be able to just do it in the language they want, because we're trying to... like, if there'll be tons of people writing these plugins, then gRPC is their default answer.
J: I'm here; if you want, I can take it. Can you hear me, guys? Yeah, so, to condense this, because we are pressed for time: you know, all these policies have been hanging for quite some time, but I believe that now we've progressed quite well, to the point of basically getting together more user stories and roles and so on, and digging a little bit deeper into this stuff, so we can really elaborate a design based on real needs, and I think we are somewhere there.
J: So right now you see the shared document; it's been shared since Friday, and I've set the due date to this Friday, so that we can time-box the whole exercise and actually move to the next step, which, as advised, is going to be a KEP with the findings from this document that I'm going to put out. Again, I'm not sure; probably we need another one that's going to be a sort of umbrella KEP, because I see this as an integral part of the whole audit sink topic.
J: I just want to give you a heads-up that, first of all, it's coherent with all the concepts that we know from the legacy policy file, and also from other resource selection models like RBAC and so on, but it takes a kind of different approach at realizing these things. So there are things that are completely new. Just as a teaser, for example: we skip stages as a concept inside, as an explicit concept.
J: I mean, the intent there is to support it, so I would urge you to take a look and see if something bothers you here, or if you want to add some more context, but in any case, leave your opinion, because I really strive to move things forward, after opinions are gathered, from this document to a pull request and consult. Okay.
A: I think this was slightly discussed: the dynamic policy is going with a very different structure of API, right, versus the one that's on disk, right? Yeah, right. So are we just going to... is there a desire to try to have some level of compatibility, or is it just, basically, 'we learned some stuff from the original approach and we don't want to carry it all forward'? Like, we don't want all the tech debt of that; we want the better one, and we'll write good docs and hope you can understand the difference.
D: So, with the new dynamic policy, it tries to fix some of those API inconsistencies and kind of clean up some stuff, eliminating the tech debt you referred to. The other thing that it really changes is that it's optimizing for policies to be extended and reused, whereas before you just had one single file, sort of statically edited by an administrator or something. Now you could have multiple audit endpoints, and we want to be able to reuse parts of the policy between those endpoints without needing to maintain a bunch of copies with possibly slight differences.
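(Editor's note: for context, the static policy shape being contrasted here, expressed with the audit.k8s.io/v1 types: a single ordered rule list in one file. The rules shown are illustrative; the dynamic proposal is about letting fragments like these be shared across several sinks rather than copied per endpoint.)

```go
package main

import (
	"fmt"

	auditv1 "k8s.io/apiserver/pkg/apis/audit/v1"
)

func main() {
	// Today's static policy: one ordered rule list, edited in place by
	// an administrator. First matching rule wins.
	policy := auditv1.Policy{
		Rules: []auditv1.PolicyRule{
			// Drop noisy system traffic entirely.
			{Level: auditv1.LevelNone, Users: []string{"system:kube-proxy"}},
			// Log secrets access at metadata level only.
			{Level: auditv1.LevelMetadata, Resources: []auditv1.GroupResources{
				{Group: "", Resources: []string{"secrets"}},
			}},
			// Everything else with full request/response bodies.
			{Level: auditv1.LevelRequestResponse},
		},
	}
	fmt.Println("rules:", len(policy.Rules))
}
```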
D: So that's a bit on the motivation for some of the changes. Now, there is a question, sort of going forward: do we want to bring these changes back to the static piece in some cases? I think the answer is yes; I'm not sure in all cases. I don't think that we should optimize for that at this stage, but I definitely think it's something that we should look into, maybe once the audit policy goes to beta, how to reconcile those better.
A: Is it possible to, like... if someone wanted to, is the API close enough that you could theoretically write a little tool that took your file and turned it into the dynamic API, if you kind of wanted to just migrate yourself off? And, like, oh yeah...
E: Because there's going to be an upgrade path, right? Where you upgrade, you'll have your static file, and the API will be available for adding dynamic audit policy. Someone adds it: what do we expect to have happen? I'd like to see that as an upgrade concern that gets listed here, to make sure we handle it.
D: The static paths should continue to be supported in parallel with the dynamic, but I agree that there are some concerns around how the upgrade works. That might also be covered in the current dynamic audit proposal, or KEP; well, I don't think it's anything new with the new policy design, and it is covered. The ability to have different policy for the dynamic webhook sinks means that we have more questions around how it works. Yeah.