From YouTube: Kubernetes SIG Storage - bi-weekly meeting 20210422
Description
Kubernetes Storage Special-Interest-Group (SIG) bi-weekly meeting - 22 April 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
Hello everyone, today is April 22nd, 2021. This is the Kubernetes SIG Storage meeting. Today we have a few design reviews. First we'll go over the volume restrictions for pod security and the PSP replacement KEP.

A
This is a good opportunity for new contributors to learn how SIG Storage works. If you are interested in anything, you can speak up during the planning in this meeting, or you can ping one of the leads, Michelle or Jan, or myself after the meeting to sign up for some work. It looks like we also have another design review on secret protection.
B
Yeah, hi. Thanks for hosting our topic today.
A
Should I make you a co-host so you can share?
B
One second. All right, can you see the Pod Security Standards documentation?
B
All right, let me give a bit of background for folks who don't know what we're talking about here. Today the standard way of limiting pod permissions is through PodSecurityPolicy, but starting in 1.21 we've deprecated PodSecurityPolicy. We've known this was coming for a long time, but didn't have a good path forward, so we were holding off on that.
B
So PodSecurityPolicy is deprecated in 1.21 and we'll be removing it in 1.25. We've been discussing, actually let me switch over to the KEP, we've been discussing for a while what the best way to replace it is, and we've come up with this proposal. It's based on the Pod Security Standards that we have documented here, and one of the key differences between this new proposal, which we're hoping will replace PodSecurityPolicy, and the previous PodSecurityPolicy is how policies are defined.
B
You
can
say:
is
this
allowed
or
disallowed
under
this
policy
and
define
as
many
policies
as
you
want
and
bind
them
to
different
pods
in
a
somewhat
convoluted
way
and
the
new
policy
instead
just
defines
these
three
privilege
levels,
so
that's
privileged
baseline
and
restricted
and
privileged
is
defined
as
being
totally
unrestricted,
as
if
you
don't
even
have
the
admission
controller
turned
on
baseline
is
defined
as
allowing
a
minimal
pod
spec.
B
So
in
other
words,
if
you
define
a
pod
or
if
you,
if
you
create
a
pod
with
just
a
pod
name,
a
single
container
that
has
a
name
and
a
container
image
and
don't
fill
out
any
of
the
other
pods
back
that
should
be
allowed
under
baseline
and,
of
course,
anything
that's
more
restrictive
is
also
allowed
under
baseline
and
for
things
that
are
elevating
privileges
beyond
that
baseline
pod,
it's
a
bit
of
it's
a
little
fuzzy
in
terms
of
what
we
allow.
B
So
some
things
like
a
privileged
container
is
clearly
more
privileged,
so
we
disallow
it,
but
when
it
comes
to
volumes
which
is
actually
what
we're
talking
about
today,
we
allow
all
volumes
under
baseline,
except
for
host
path
volumes,
which
we
completely
forbid
and
then
the
restricted
profiles.
The
third
one
is
much
more
heavily
restricted
and
follows
what
we
consider
to
be
best
practices.
B
We
require
the
secomp
profile
default
second
profile,
which
I
wish
was
enabled
in
baseline,
but
since
it's
not,
we
enable
it
and
restricted,
and
then
notably,
we
are
much
more
restrictive
of
volume.
Types
and
the
philosophy
of
the
volume
types
was
sort
of
for
persistent
volumes.
Rather
than
using
the
built-in
inline
volumes.
Persistent
volumes
should
be
going
through
the
persistent
volume
and
persistent
volume
claim
features.
B
We
do
allow
the
built-in
ephemeral
volume
types
so
empty
gear,
projected
volumes,
secrets,
config
maps,
but
we
try
and
disallow
all
the
others,
but
in
yesterday's
meeting
about
as
we
try
and
kind
of
lock
down
these
the
definition
of
these
different
profile
levels,
we
had
ended
up
having
a
discussion
about
the
csi
volume
type
and
whether
we
should
allow
that
and
restricted.
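The volume rules for the three levels described above can be sketched as a small check. This is only an illustration of the rules as discussed; the function name and the exact restricted allow list are assumptions (and the csi type is the open question of this meeting), not the actual admission controller code.

```python
# Illustrative volume-type rules for the three privilege levels discussed
# above. The restricted allow list here is an assumption based on the
# meeting; whether "csi" belongs in it is exactly what is being debated.
BASELINE_FORBIDDEN = {"hostPath"}  # baseline forbids only hostPath volumes
RESTRICTED_ALLOWED = {"configMap", "secret", "emptyDir", "projected",
                      "downwardAPI", "persistentVolumeClaim", "ephemeral"}

def volumes_allowed(volume_types, level):
    """Return True if every volume type in a pod spec passes the level."""
    if level == "privileged":
        return True  # privileged is totally unrestricted
    if level == "baseline":
        return not any(t in BASELINE_FORBIDDEN for t in volume_types)
    if level == "restricted":
        return all(t in RESTRICTED_ALLOWED for t in volume_types)
    raise ValueError(f"unknown level: {level}")
```

For example, a pod with only an emptyDir volume passes restricted, while a hostPath volume fails even baseline.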
B
The way the new policy is bound is through a label on the namespace. Let me just pull that up here, yeah. Basically, we put a label on the namespace that says what privilege level the namespace should run as, so we would have something like pod-isolation-policy/allow: restricted, or baseline. This is a label on the namespace and it applies to all pods in the namespace, with some exceptions through another API.
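A minimal sketch of that binding follows. The label key below is the working name mentioned in the meeting, so treat the key, the "privileged" default for unlabeled namespaces (which matches the backwards-compatibility behavior discussed later), and the helper itself as illustrative assumptions rather than the final API.

```python
# Hypothetical sketch of how a namespace label selects the privilege level.
# The label key is the working name from the meeting, not a final API.
LABEL_KEY = "pod-isolation-policy/allow"

def enforced_level(namespace_labels):
    # An unlabeled namespace is not restricted, which keeps the mechanism
    # backwards compatible when it is enabled by default.
    return namespace_labels.get(LABEL_KEY, "privileged")
```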
B
The
idea
is
that
we
want
to
have
kind
of
well-defined,
basically
best
practice
policy
levels
that
we
think
apply
to
the
majority
of
users
and
then
for
someone
who
needs
more
fine-grained
control
or
has
kind
of
advanced
or
more
complicated
policy
requirements,
we're
just
recommending
that
they
use
a
third-party
admission
web
hook
so,
for
instance,
oppa
or
gatekeeper,
or
one
of
those
other
options.
E
Is it possible for a third-party option to also build on top of the ones we provide? Like, can you use these profiles along with Gatekeeper?
B
Yeah, definitely. The way admission control typically works is that each admission controller can say "no opinion" or "deny," so once one admission controller rejects the request, the request is rejected flat out. If you want to layer on additional restrictions beyond what's here, it's easy to have this work in parallel with a custom admission plugin.
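That "any deny rejects" composition can be sketched in a few lines. Everything here, the callable shape and the example check, is a hypothetical illustration of the semantics described, not the real admission chain.

```python
# Sketch of deny-wins admission composition: each plugin returns either
# None (no opinion / allow) or a denial reason, and the first denial
# rejects the request outright.
def admit(pod, plugins):
    for plugin in plugins:
        reason = plugin(pod)
        if reason is not None:
            return (False, reason)
    return (True, None)

# Layering a custom check (e.g. a Gatekeeper-style policy) on top of the
# built-in one just means adding another callable to the list.
def deny_host_network(pod):
    return "hostNetwork is not allowed" if pod.get("hostNetwork") else None
```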
F
I know that there are CSI drivers being implemented for many of those persistent disk types, and we weren't sure: if we allowed pods to use the csi volume source, could they effectively make use of the persistent disk CSI drivers? Could they create a GCE persistent disk or an Azure File persistent disk via the csi volume source, or does that only allow using ephemeral CSI drivers?
E
So the answer is "possibly." The way these CSI ephemeral volumes are implemented today, the driver needs to explicitly declare support for it.

E
I think for the most part we have tried to encourage the use of this for ephemeral use cases and discourage the use of this feature for persistent use cases. But I think some drivers may have already decided to go ahead and implement the persistent case using this API, even though it wasn't really the intended option.
E
So the answer is, it depends on which driver. I think for most of the drivers on this list that are in restricted, their equivalent CSI drivers have not implemented support for this feature, but I think there are a handful, maybe one or two, that have.
E
Yeah, I think we can definitely provide the table. I think, though, there's also this other concern, that there's actually a bunch of new CSI drivers that don't have in-tree equivalents, but they may have also implemented support for this.
F
In
general,
is
it
reasonable
for
like
an
ephemeral
volume,
if
that
is
created,
does
that
does
an
ephemeral
volume
mean
the
volume
is
provisioned
for
this
pod
and
then
torn
down
when
the
pod
goes
away?
G
The persistent volumes use a volume handle to identify the volume in the storage backend, and we don't have that in the inline csi volumes, as far as I know.
B
The philosophy that we've gone with for a lot of the extension points in the pod is that if it's something controlled by the cluster administrator, or that requires cluster-scoped, highly privileged permissions, we are sort of saying: trust them to do the right thing, or to implement appropriate protections if necessary. Of course, we'd prefer to provide more guidance than that.
F
Is it generally reasonable to say restricted pods can use ephemeral volumes, and if you're going to deploy an ephemeral CSI driver that is unsafe, then you need to restrict use of it by quota or by custom admission? I'm trying to think if there are other equivalent things on a restricted pod.
F
That
would
open
holes
that
someone
might
an
admin
might
not
realize
exist
like
even
even
the
new
ephemeral
volume
type
in
the
pod
you're
you're
limited
to
things
that
you
can
express
on
a
persistent
volume
claim.
So
you
don't
have
that,
like
full
control
over
volume,
attributes
that
you
do
under
csi.
E
We have the CSIDriver object where, when you install the CSI driver, you have to declare whether it supports this ephemeral mode or not. That might be a way we can potentially try to restrict things, but it would still have to be on a per-driver basis.
B
Yeah, so there are sort of three options we're considering here. One, which is actually what we had originally proposed and is probably over-restrictive, is just to say no, you can't use inline csi driver volumes with the restricted profile.
B
The next option is to basically say that's the default, but provide some sort of escape hatch or configuration mechanism, either statically configured or through the object that you just mentioned, to say "we consider this volume safe, so allow this one," kind of an allow-list approach. The other...
H
How does this work? We know these three policies can be added to the namespace, but what decides which policy a given namespace gets? What is the default?
F
To be backwards compatible, and to be able to enable this by default, if you don't indicate a policy level for a namespace, it doesn't restrict the namespace. So it's up to the person provisioning the namespaces. Generally, in a cluster where you want to lock down some users of the cluster...

F
You don't generally give those locked-down users permission to create their own namespaces and do things globally. So there's usually a provisioning step and an access-granting step that sets up a namespace and then gives a user access to that namespace, and part of that would be deciding what access level that user should have.
G
Going back to the three options: I admit I kind of like the existing PSP approach, where the admin says these drivers are safe and these drivers are not safe. Where does that fall in the options you provided?
E
I think that sounds like it falls into either the second option, where we have allow lists of specific drivers, or, maybe this is a fourth option, where we defer this to Gatekeeper or someone to layer on more restrictive policies.
C
So
if
the
purpose
of
restricted
policy
is
kind
of
motor
was
unrestricted
right,
I
feel
like
I
think,
csi,
because.
C
Says
that
driver
could
be
very
arbitrary
and
we
don't
have
much
of
control
of
that,
and
so,
if
you
want
to
motorways
on
restricted
side,
I
feel
adding
the
csi
is
safer.
E
I
think
it's
pretty
it's
going
to
be
pretty
pervasive,
especially
for
there's
a
lot
of,
like
I
think,
cert
manager
and
like
vault
and
other
really
popular
secret
management
mechanisms.
There
they've
all
started
using
this.
E
Yeah, so to use this feature, someone needs to install a CSI driver that supports it. They also need to set the field in the CSIDriver object, which is a non-namespaced object, so they already need some sort of higher permissions to be able to enable these kinds of CSI drivers in the cluster.
B
Given a CSI driver implementation, like the actual binary piece that they install, can you toggle that ephemeral or inline field on the CSIDriver object on and off, and have it work interchangeably with the same driver?
J
Well, for safety, you would want to create it with the feature off from the beginning, so it was never on, right? Yeah.
F
Since
baseline
allows
csi,
I
think
going
back
to
tim's
point.
If
you
have
an
ephemeral,
csi
driver
that
is
exposing
like
behavior,
you
would
consider
unsafe
or
escalating
like
host
path,
or
you
know,
consuming
volume,
attributes
that,
like
give
you
direct
control
over
like
kernel
parameters
or
some
something
like
really
significantly
escalating,
I
think
you
would
not
want
that
ephemeral
driver
accessible
or
you
would
want
to
protect
it,
but
with
quota
or
ignition.
F
What would be helpful for me is knowing whether the CSI drivers corresponding to those known inline volume sources support ephemeral by default.
B
I
think
I'm
leaning
towards
just
allowing
csi
drivers
for
restricted
and
basically
solving
this
through
documentation
on
the
kind
of
pod
security
side.
We
would
say
you
know
if
you
have
an
unsafe
csi
driver,
then
we
recommend
either
disabling
inline
definitions
or
adding
an
admission
control
and
then
maybe
on
the
csi
documentation
side,
we
could
say
we
recommend
only
using
the
inline
only
enabling
inline
volumes
for
things
that
should
be
safe
to
restricted
users.
F
This was one of the options we were considering, and I am not a fan of it, because it makes evaluation of a policy against a given pod spec stateful. If you ask "does restricted allow this pod spec?" right now, we don't need any other cluster state to answer that question. But as soon as you make it configurable per driver via the API, the answer depends on which cluster you're in.
F
Yeah,
so
if
allowing
if
you're
talking
about
policy
level
like
what
pod
specs
are
allowed,
does
anything
prevent
creation
of
a
pod
that
has
a
csi
driver
that
references,
a
driver
name?
That
is
not
that
does
not
exist
or
is
not
ephemeral,
or
is
that
a
runtime
like
it
lets
me
create
the
pod
and
then
when
it
goes
and
gets
scheduled,
and
then
the
cube
tries
to
do
stuff
with
it?
At
that
point
it
I
think
it's
one.
F
I don't think that's true. When I traced it, if it's pulling the volume source from a pod spec, it requires the associated driver to have the ephemeral mode.
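The kubelet-side check just described can be sketched like this. The field and mode names below mirror the CSIDriver object's volumeLifecycleModes / Ephemeral convention, but the function itself is a hypothetical illustration, not the traced kubelet code.

```python
# Illustrative version of the check described above: when mounting an
# inline CSI volume from a pod spec, the referenced CSIDriver object must
# declare the Ephemeral volume lifecycle mode.
def can_mount_inline(driver_name, csidriver_objects):
    drv = csidriver_objects.get(driver_name)
    if drv is None:
        return False  # driver not installed in the cluster
    return "Ephemeral" in drv.get("volumeLifecycleModes", [])
```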
F
Yeah, a given driver that says it supports ephemeral mode could always be buggy; I'm not as concerned about that. I'm mostly concerned about whether the known CSI drivers for the existing in-tree inline types are saying they support ephemeral mode or not. That's one question, and then the other is whether it's reasonable to say: if you're going to deploy ephemeral CSI drivers that are unsafe or allow escalation, it's your job to protect against pods using those.
F
We don't want to take your whole meeting. There are a couple of questions I think we can get answered out of this and then circle back, so I'm happy to let you continue your meeting and not take over any more.
A
Yeah, we wanted to start with the planning, but I think this is a great discussion. So do we want to schedule another one to continue, or do we want to continue the discussion on the mailing list? How do you want to handle it?
B
I saw that David Eads just joined the meeting. He was the one who originally raised this topic to us.
K
Yeah, I plan to watch the recording of whatever was already in progress, and it definitely was, so I'll watch the recording. If I have questions, I'll probably start on Slack.
B
I'll probably drop off, but thanks for having us.
I
Thanks. Since we have 20 minutes left, should we just do the other design review that's on the agenda today, get done with that, and then move the planning session to the next meeting?
A
Oh, I forgot about that. Okay, in that case, let's just get started.
A
We should hurry a little bit; we may have a lot of new contributors today, so probably we should just get started.
A
Okay, all right. So this is our planning sheet. As I said, we copied this one from 1.21, so we start with this. Basically, we will continue the items that we didn't finish in 1.21, and then if there is anything new that's not included here, feel free to add it to this spreadsheet. Let's go over this, see how much we can cover, and then we'll continue at the next meeting.
A
Okay, so the first one is delegating fsGroup to the CSI driver instead of the kubelet. This is Hemant's; this will continue in alpha in 1.22.
A
Yeah, and the next one is CSI online/offline resize. Okay, so this one and the next one, those are related, right?
I
No, just empty the cell. Okay, sure.
A
Okay, and the next one, secret symlinks recursive permission handling: is Jan still working on this for 1.22? Are you still working on this?
N
Yeah, so for this one, for the one PR that was outstanding: I know Humble, I think, is here as well. If you don't have time to work on it, I'll continue that PR; I'm happy to push that forward. But I think early on when we discussed this item, if I remember, we were talking about there possibly being a larger CSI in-tree read-only refactor.
N
For the PR, yeah, I'm good to go; I can continue working on that.
A
The next one is issues related to assuming volumes are mount points. Jing, do you know the status of this?
C
I think the merged PR was reverted, but it's not ready to merge again yet, because the race condition between the volume manager and the pod is a known issue. We don't have a good solution yet. We'll keep that item in mind and work on it as soon as possible.
O
I see that you struck out, or almost deleted, the csi ephemeral volume item.
P
No, we are not on it. We don't have plans to do this before the second half of the year, which will miss 1.22.
A
I'll change it to 1.23 then. Okay, or does anyone else want to work on this?
E
I think there was some suggestion about changing the API a little bit.
O
Yeah, when we discussed the generic ephemeral volumes, we said that the existing ephemeral csi type is just too confusing and should be renamed. The enhancement issue for it has captured those discussions, but yeah, there is an API change proposed.
O
The other, more procedural issue is that this whole feature doesn't have a KEP in the current format, so KEP reviewers might ask for one before making changes, before moving on.
P
Okay, a question I have around this: as I understand it, the use cases for this are around things like certs and secrets. I know at Google there was a security team that was using this, or interested in using it. I guess my point is that someone who's engaged in one of these actual use cases is going to be a lot more motivated to drive this forward than, it sounds like, anyone here.
A
Okay, so we'll see if we get a new owner; otherwise we'll continue it in 1.23 then.
E
Maybe it would be worth it, and we don't have to do this here, to separate this spreadsheet into items where we're potentially looking for help or owners, so that we could ask for help in certain areas.
A
Okay, thank you. So the next one, yeah, this is spreading over failure domains, and the next one, the volume group API for consistency groups. Those two are related; the spreading one actually depends on the second one. I think there are still some design issues with the volume group, so let's see. If we can get the KEP merged, that'll be great, so I think I'll leave that in design for now.
A
Next,
one
is
cs
out
of
three
movies:
cozy
driver
is
this:
oh,
this
is
christian
in
progress.
Okay,.
A
Okay, thanks. And okay, so then we have the NFS provisioner and the NFS client provisioner, so those are being worked on. It looks like it's just a doc update here; I don't know what else is needed for this one other than docs.
Q
I think alpha would be a bit aggressive. There are still some blockers that we need to address, and I'm not certain how to address them. This all deals with transferring CSI secrets and how to respect those boundaries, so it's something I would like to continue working on the design for in this next release.
A
Okay, so design it is; we'll keep that in design.
A
Okay, and the next one is volume health monitoring, so this will be staying in alpha. We have a meeting tomorrow to talk about use cases and what we are going to do next, so we'll decide. Yes, I think there are a few use cases that I heard about, and we can talk more about that in tomorrow's meeting.
A
I think beta is still... yeah, I think beta is also possible, but the thing is, right now I don't know of anything that has implemented it yet. I'd like to see at least some CSI driver implement it; right now we only have it in the hostpath driver. Okay, so maybe we'll see if a driver picks it up.
E
Yeah, it might be good to send out an email to the csi announce list or something like that, saying that this feature is now available: please try it out and give us your feedback.
A
Yeah, so I think one thing is bringing the existing feature to beta, and then the second thing is, I think there are some requests for making this feature available so we can actually react when something happens. That will need some changes to the existing design, but I think that can be behind a new feature gate or something; we'll see what that is. But for the existing one, if it's just events, I think that's very helpful.
R
This is Trini; I can give you an update. We are still waiting on Tim to get the API review; hopefully we can get that done as soon as possible. And about the community meeting: we had long discussions around the credential rotations.
R
Essentially,
we
decided
that
most
of
the
cloud
providers
are
not
providing
support
for
that.
We
would
probably
provide
a
support
with
the
basic
service
account-based
auth.
Our
current
focus
is
on
the
development
and
we
are
ramping
it
up.
R
I
have
a
report
that
created
for
media
driver
and
the
pte
is
in
progress
so
that
we
could
test
with
media
driver,
as
our
reference
driver.
A
Okay,
changeable
tracking,
I
think,
we'll
still
be
doing
design.
This
is
quite
a
few
things.
We
need
to
sort
out.
A
Next
one
is
the
new
reroute,
only
access
mode.
Do
you
have
chris
here.
L
Yes,
so
I
have
a
prototype
of
this
working.
The
cap
is
filled
out
for
alpha
and
received
first
round
feedback
on
that
and
I'm
currently
working
on
documenting
the
version.
Sku
behavior
for
this.
E
There is a proposed 1.22 schedule, and I think the feature freeze is tentatively around the end of May, although I think there are discussions about moving it two weeks earlier, so mid-May.
A
It
doesn't
seem,
I
thought
it's
four
months,
but
it
seems
to
be
still
pretty
well.
E
There's one or two weeks more for code freeze, but I don't think the actual enhancements freeze is going to be extended.
A
Oh yeah, okay, pretty much as in past times. Okay, thanks. All right, so I think we went through more than half of this, so we'll continue at our next meeting.
For any new contributors who are interested in contributing: if you have any questions, or want help with anything, feel free to reach out to us after the meeting. All right, thank you.