From YouTube: Kubernetes Data Protection WG Bi-Weekly Meeting 20210908
Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 08 September 2021
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xiangqian Yu (Google)
A: Good morning. Today is September 8, 2021. This is the Kubernetes Data Protection Working Group meeting. Today we have a couple of items on our agenda.
A: Thanks, Stephen. He offered some time ago to talk about AWS and the AWS controllers for Kubernetes, which is highly related to our data protection working group, and he and someone — I guess I don't know the name, sorry about that — is that Keith? — It's kind of true, but we've brought a few other people from the Kubernetes side to speak with you. — Perfect. We're going to go through this topic, and then we have a few updates on the data protection white paper.
A
Then,
if
we
have
a
time
I
think
phone
wants
to
talk
about.
The
cap
was
change,
block
tracking
and
there
yeah.
If
we
still
have
time,
then
we
can
discuss
open
issues.
A
Now,
let's
just
get
started
with
the
elaboration
sdk
stephen
you
and
the
team
edwards
team,
you
have
the
control
and
machine.
Do
you
mind
make
them
the
co-owner?
Whoever
is
presenting.
D: Sorry, I actually wasn't prepared to present anything. I didn't realize that I was going to be presenting; I actually thought I was just going to be answering some Q&A about ACK. But I can —
D: I can give you an introduction to what ACK is all about, our roadmap, and the various controllers that we already have in preview.
B: Happy to. And then, just as context for people in the group: the reason we got interested is, again, all of us here spend a lot of time thinking about what we can back up within a cluster, and that's great. But I think we all know that customers are deploying with dependencies outside the cluster, whether it's databases or file systems or object stores or whatever it is.
B
You
know,
they're
they're,
linking
external
resources
as
well
as
internal,
and
so
when
I
first
heard
about
the
ack
from
aws,
some
of
our
challenges
were:
how
do
we
discover
this?
How
do
we?
How
do
we
link
to
it?
You
know
we
were
sort
of
asking
customers
to
kind
of
hand
detail
for
us
by
the
way
I've
got
these
dependencies,
which
is
both
inefficient
and
probably
always
wrong,
and
so
so
so
we
were
interested
in
sort
of
this.
B
This
path
that
aws
was
taking
just
from
a
discovery
and
potentially
protection
standpoint,
so
so
yeah,
so
so
that
was
that
was
sort
of
our
view
on
this,
and
I
know
I
think
tom
and
I
probably
at
one
point
during
one
of
the
meetings,
talked
about
different
ways
that
we
do
this,
and
so
so
that
was
that
was
really
sort
of
the
genesis
of
this.
B
Is
you
know
how
what
are
ways
that
we
can
better
understand
outside
dependent
resources,
so
so
that
we
can
better
discover,
protect
recover
them
in
in
conjunction
with
what's
in
the
cluster,
so
so
that
was
so.
That's
the
background,
jay
that
that's
kind
of
that's
kind
of
what
we
do.
That's.
Why
that's
why
we
said
boy
it'd
be
great.
If
someone
could
explain
to
us,
you
know
again
what
you're
doing
what
the
road
map
looks
like
and
and
then
like.
I
said
this
is
not
a
shy
group,
no
worries.
D
At
all
no
worries
at
all,
so
I
gave
you
guys
a
link
to
the
they
actually
brand
new
documentation
site
for
for
ack.
We
actually
just
changed
our
doc
site
from
a
make
docs
to
a
hugo
based
system.
Yesterday,
I
think
right
I
mean
so
anyway
welcome
to
our
new
doc
site,
we're
we're
still
we're
it's
a
still
work
in
progress,
and
obviously
we
would
love
your
feedback
and
input
as
to
ways
that
we
can
improve
the
documentation
and
user
experience
of
ack.
D: But let me go ahead and rewind a little bit. ACK is for control plane operations, not for data plane operations, and I'll get to the reason why that's important in a second. We have a set of custom Kubernetes controllers, one for each of the AWS service APIs. So we have an S3 controller, an RDS controller, an ECR controller, etc.
D: Sorry, someone's outside my window. So we have, like I said, an S3 controller, an RDS controller, an ECR controller, an EC2 controller, etc., and each of those service controllers interfaces with just one AWS service API. The Kubernetes custom controllers are published as Docker images in the ACK project.
D: We have the ability to support cross-account resource management as well. A single ACK service controller can call STS AssumeRole and pivot its client to manage the lifecycle of resources across different AWS accounts. What this means is you don't need to install, say, an ACK S3 controller in multiple Kubernetes namespaces or multiple Kubernetes clusters.
D: One of the big differences between ACK's design principles and something like Crossplane is that we have these separate controller binaries, as opposed to Crossplane, which has a single provider binary for each of the cloud providers. So there's, like, a provider-aws binary that speaks to IAM, speaks to S3, speaks to RDS, etc.
D
Security
stance
and
level
of
comfort-
I
guess
with
having
a
single
binary,
have
I
am
permissions
to
speak
with
lots
and
lots
of
different
aws
services,
and
so
we
decided
it
was
a
more
tenable
security
approach
for
us
to
publish
these
individual
binary
images
which
we
could
scope
the
I
enroll
permissions
that
are
running
that
controller
just
for
the
specific
aws
service
in
question,
so
that
you
don't
have
like
these
god
level.
I
am
roles
right
or
super
user
ion
rules.
D
Another
big
difference
between
ack
and
its
predecessor
called
the
aws
service
operator.
Is
we
don't
use
cloud
formation,
so
many
of
you
are
probably
familiar
with
you
know,
using
cloudformation
templates
to
structure
the
creation
or
management
of
a
group
of
aws
resources
as
a
as
a
unit
as
a
cloud
formation
stack.
We
don't
do
that.
We
don't
use
cloud
formation,
we
actually
we
call
the
the
direct
aws
service
apis
and
not
wrap
the
entire
thing
in
a
cloud
formation
transaction
and
the
reason
that
we
do.
D
That
is
because
we
we
found
that,
because
kubernetes
wants
to
be
the
desired
state
of
the
source
of
desired
state
truth
and
cloud
formation
also
wants
to
be
the
desired
state
of
of
truth,
and
those
two
things
have
different
ideas
of
what
that
desired.
State
of
truth
is
in
the
case
of
cloud
formation.
It
views
everything
in
terms
of
the
stack
right
like
the
status
of
the
stack
as
it
you
know,
moves
through
a
series
of
resources.
D
That's
its
definition
of
you
know
like
the
the
observed
state
of
those
resources,
whereas
kubernetes
has
a
different
idea
of
what
the
the
desired
state
and
the
observed
state
of
a
resource
is,
and
it's
more
granular
than
the
cloud
formation
idea.
D
E
I
am
a
little
unprepared
for
this.
I
apologize,
I
mean
aj.
I
had
a
quick
question
for
you
on
that.
How
does
that
work
with
the
role
mapper
like
the
iam
roll
mapper
for
eks?
Do
you?
E
It
seems
like
you're,
setting
your
own
roles
directly,
but
you
know
the
way
that
we
typically
see
people
configure
eks
is
through
the
role
mapper
two
things.
D: Funnily enough, we actually have an EKS controller in ACK, which is sort of inception-like and very much similar to Cluster API Provider AWS in little ways. But we communicate with the EKS API in the ACK controller for EKS. The IAM roles that you need to execute the EKS controller for ACK — that's kind of related to the role mapper that you set up with EKS, but other than that —
D: So I wanted to introduce Amine. Amine, say hi from Morocco.
D: Yes, and Amine actually developed the cross-account resource management functionality in ACK about a year ago. It's one of the earlier features that we added to ACK. I can give you a link to some of the documents that we put together for this, right here.
A: Right. If I understand you correctly, ACK is a per-service plugin or controller model which manages your AWS resources. Is that the —
D: These API model definitions — something internally we call the Coral model definitions — we consume them and output the entire controller implementation for each of these service controllers. This is what makes ACK a little bit different from something like Kubebuilder, like if you were just to use Kubebuilder and generate a controller that talks to S3.
D
But
at
the
end
of
the
day,
what
two
builder
will
give
you
is
sort
of
like
a
stub
implementation
of
reconcile,
and
then
you
gotta
go
and
implement
the
the
the
meat
in
ack.
We
have
a
common
run
time
and
a
code
generator
that
actually
consumes
these
api
models
and
actually
spits
out
the
full
implementation
of
the
service
controller.
By
examining
the
api
models,
the
input
and
output
shapes,
we
use,
the
aws
sdk
go
and
it
looks
at
the
input
and
output
shapes
of
the
individual
api
calls
and
maps.
D
Those
input
and
output
shapes
to
a
custom
resource
definition
right.
So,
for
instance,
in
the
s3
controller
we
have
the
bucket
crd
and
when
we
call
the
create
bucket
api
in
the
aws
sdk,
go
we're
creating
the
create
bucket
input,
shape
and
we're
populating
fields
on
that
create
bucket
input,
shape
with
fields
from
the
bucket
cr
that
someone
has
created,
and
likewise
the
response
from
create
bucket
we're
setting
certain
fields
in
the
cr
status
based
on
the
the
output
shape.
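As a rough sketch of the user-facing side of that mapping, a Bucket custom resource could look something like the following. The group and version follow ACK's `services.k8s.aws` naming mentioned later in the discussion; the exact spec fields are an assumption, so check the controller's published CRD:

```yaml
# Hypothetical ACK S3 Bucket custom resource (field names illustrative).
# The controller would copy spec fields into the CreateBucket input shape
# and write fields from the CreateBucket output into .status.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-bucket
spec:
  name: my-app-bucket   # maps to the Bucket name parameter of CreateBucket
```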
D: All of that code, which I like to call SDK linkage code, is really repetitive, super boring, and tedious, and nobody likes to write it. So we wrote a code generator that basically generates all of that really annoying, tedious code. That code generator is ack-generate, and I can give you a link to its source repo here — code-generator.
D: Thanks, Amine — you're already ahead of me. So that code generator is used to generate all of our ACK service controllers, and it's also used to generate the Crossplane AWS provider.
D
So
a
lot
of
people
aren't
actually
aware
of
this,
but
yes
in
cross
plane,
even
though
we
have
you
know
a
different
mission
in
ack
we're
a
sort
of
lower
level
than
cosplaying.
We
don't
offer
that
abstraction
layer
above
the
individual
cloud
provider.
They
actually
use
the
ack
generate
code
generator
to
take
care
of
all
that
tedious.
You
know
sdk
linkage
code
that
I
talked
about
earlier
and
also
to
generate
their
customer
resource
definitions
that
are
specific
to
the
aws
provider
inside
of
crossplane.
D
So
it's
actually
been
a
really
cool
experience,
collaborating
with
the
the
crossplane
contributor
community
they've.
Given
us
a
lot
of
feedback
on
how
the
early
versions
of
the
code
generator
worked
or
didn't
work,
and
you
know
helped
us
fix
bugs
in
the
early
versions
of
the
code
generator
and
now
their
ci
systems
are
like
linked
in
with
our
our
code,
generator
ci
systems
as
well,
so
that
we
can
generate
when
we
push
out
a
new
generation
of
our
new
version
of
the
code
generator
they
can
regenerate
their
their
provider.
Aws
a.
A
Couple
of
questions
that
do
those
controllers
actually
manage
the
full
life
cycle
and
how
does
discovery
happen
because
this
I
mean
stephen,
was
talking
about
the
discovery
of
externally
managed
services
in
the
cloud
or
in
any
places
and
how
we
backup
them
right.
So
I
want.
I
want
to
understand
how
this
is
associated.
D
Well,
most
api
calls
to
the
service
if
you
can
make
that
control
plane.
Api
call
right,
create
database
instance,
create
db,
replica,
create
db
cluster,
those
kinds
of
things
and
the
various
update
code
paths
and
deletes
if
you
can
make
that
call
in
the
aws
api
generally.
The
ack
controller
is
going
to
support
that
functionality.
D
However,
for
doing
things
like
backing
up
and
restoring
of
like
your
own
owned
mysql
database
instance,
or
something
like
that,
we
we
don't
support
that
the,
and
also
we
don't
support
like
the
s3
object,
api
or
any
any
data
plane,
api
really
right.
So
what
we
will
do
is
give
you,
for
instance,
the
endpoints
to
your
elasticsearch
domain
right,
because
that
appears
in
the
status
of
your
elasticsearch
domain
cr,
but
we're
not
going
to
communicate
over
that
that
uri
right
just
like
we're
not
going
to
send
sql
commands
over.
D
B: Let's take RDS specifically, though, because so many people use RDS or Dynamo, and I know you've got controllers for both of those. So say today I went and created an application that said: I want an RDS — whichever, you know, Aurora or something, doesn't matter.
B
So
so
I
I
know
with
ack
how
to
to
specify
that.
I
want
that.
How
how
I
mean
are
there
apis,
then,
where
I
could
say
trigger
an
rds
backup
or
even
just
I
can
find
out.
D
So
I'm
currently
working
on
that,
so
there
there
is
a.
I
know
like
yeah
it'll
be
done
tomorrow.
No,
we
won't
I'm
working
on
it.
But
now,
if
you
can
call
like
the
create
db
snapshot
command
in
the
aws
api,
then
the
ack
controller
for
rds
should
be
able
to
do
that.
Each
of
those
each
of
those
create
call
and
create
api
calls.
D
Actually
we
represent
as
a
separate
resource
so,
for
instance,
for
db
snapshot.
It's
it's
a
separate
resource
from
db
instance
right
and
so
to
create
a
snapshot.
You
would
do
coop
cuddle,
apply
or
create
dash
f
pass
in
a
gamble
manifest
that
describes
the
you
know
the
the
db
instance
that
you
want
to
snapchat,
etc.
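To make that workflow concrete, a manifest along these lines could be what gets passed to `kubectl apply -f`. The kind and field names here are guesses at the eventual DBSnapshot CRD, since the speaker says this support was still in progress:

```yaml
# Hypothetical DBSnapshot custom resource for the ACK RDS controller.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSnapshot
metadata:
  name: orders-db-snapshot
spec:
  dbSnapshotIdentifier: orders-db-snapshot
  dbInstanceIdentifier: orders-db   # the existing DB instance to snapshot
```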
D
So,
yes,
we're
gonna,
get
there
sooner
rather
than
later,
I'm
working
real
hard
to
get
the
snapshot.
Support
into
the
rds
controller
rds,
specifically
because
of
there
are
some
quirks
and
idiosyncrasies
between
the
db
cluster
resource
and
the
db
instance
resource
and
like
db,
read
replica
and
some
things
like
that.
D
But
I
develop
these
resource
managers
within
the
rds
controller
to
handle
snapshots
and
db
proxy,
and
that
kind
of
thing-
and
I
run
into
these
little
idiosyncrasies-
I'm
working
with
the
rds
service
team
to
sort
of
smooth
over
any
of
those
weird
oddities
in
the
api.
I'm
logging
bugs
with
the
the
rds
team
to
show
them
sort
of
like
things
that
I
ran
into
and
just
generally
working
with
them
to
to
smooth
over
any
of
the
weird
behavior
and
make
sure
that
the
rds
controller
hides
that
weirdness
from
you.
D: That's it — so that's the goal. — Makes sense.
A
Let's
take
a
step
back
jay,
I
I
really
I'm
really
interested
to
understand.
How
do
you
model
in
ack
all
the
various
actions
like
each
aws
api
course
into
a
corresponding
cr?
I
mean
I
can
imagine
you
have
thousands
of
apis
right.
How
how?
How
did
you
model
that.
D: Okay, so I gave you a link to our service controller roadmap, which lists the various controllers that we are currently working on and what's sort of on our radar. Tom, feel free to add AWS Backup in there, and we will happily flesh out a plan to get that developed.
D
Okay,
so
I
there's
a
second
link
that
I
put
in
here
called
api
inference,
and
this
actually
goes
into
quite
a
bit
of
detail
about
how
we
read
those
api
model
files
and
determine
you
know
what
is
a
resource
in
the
api
and
how
do
we
map
specific
api
calls?
You
know
to
create
operation,
update
operation,
delete
operation.
We
also
have
this
thing
called
the
generator.yaml
file.
It's
basically
a
set
of
instructions
to
the
code
generator
about
those
weird
oddities
and
idiosyncrasies,
and
inconsistencies
in
each
of
the
aws
service
api.
D
So
we
can
basically
say
okay
well,
for
this
particular
api.
It
mostly
follows
like
a
normal
crud
type
of
pattern,
but
I
don't
know
the
the
update
operation.
The
update
api
operation
is
called
this
instead
of
like
update
or
put,
or
you
know,
modify
right,
so
we
can
kind
of
give
it
give
it
some
hints
as
to
how
to
find
the
particular
api
operations
that
correspond
to
update,
calls
or
deletes
or
creates.
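A generator.yaml hint of the sort being described might look roughly like this. The key names below are approximations for illustration, not the code generator's actual schema:

```yaml
# Illustrative generator.yaml fragment: tells ack-generate which
# non-standard API operation corresponds to the update code path for a
# resource whose API doesn't follow Create/Update/Delete naming.
resources:
  Repository:
    update_operation:
      # Hypothetical hint: use this operation instead of a
      # conventionally named Update*/Modify* call.
      operation_name: PutRepositoryConfiguration
```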
D: We also have the ability to put generic hook points into the code. Think about the resource managers that go into each ACK service controller, one for each of the custom resources: for instance, in the RDS controller we've got a resource manager for DBInstance, a resource manager for DBCluster, and one for DBSecurityGroup, etc. Each of those is a separate resource manager.
D
That's
actually
a
ghost
struct
that
implements
a
certain
interface
for
looking
up
a
single
resource,
creating
a
resource
updating
resource,
et
cetera
and
those
code
paths
for
finding
a
single
resource,
creating
a
resource
updating
resource,
et
cetera
sort
of
follows
a
standardized
pipeline
of
events.
You
construct
the
input
shape.
You
call
the
api,
you
process,
the
output
shape
that
you
get
back
from
the
api
and
set
attributes
on
the
custom
resource
and
return
the
custom
resource.
D
We
have
hook
points
within
our
code
generator
where
we
can
override
the
behavior
of
a
particular
service
controller.
So
if,
for
instance,
there's
you
know
a
weird
update
code
path,
let's
just
say
s3
buckets
right.
I
don't
know
how
many
of
you
know
this,
but
there
are
23
separate
http
api
calls
to
update
s3
bucket
attributes.
D
It's
crazy.
Anyway,
there's
really
no
way
that
we
can
auto
generate
that
that
kind
of
a
weird
situation,
so
we
add
custom
code
hooks
into
the
s3
controller
for
the
update
code
path
that
calls
put
bucket
lifecycle,
configuration
and
put
bucket
cores
configuration
and
all
those
separate
api
calls
in
this.
D
In
the
same
way,
for
like
the
rds
controller,
there
are
some
slight
weirdnesses
in
the
update
code
pass
where
we
know,
for
instance,
that
if
you
call
a
modify
operation
on
a
db
cluster
and
the
engine
mode
is
serverless,
that
some
of
the
fields
cannot
be
modified.
Things
like
the
preferred
maintenance
window,
like
things
like
that,
you
just
can't
set
those
otherwise
rds
will
return
you
a
big
fat
error
and
then
the
rds
controller
will
place
your
custom
resource
into
a
terminal
condition
which
isn't
good.
D
A: One of the aspects we were trying to explore — or to find out whether it is too hard — is how we effectively back up services that run outside of the Kubernetes cluster. In this case, for example, an S3 bucket or a GCS bucket is exactly the same kind of thing, right? It's exactly those services.
D
Fact,
if
you
take
a
look
at
the
cross,
plane
x,
xrm
the
extensible
resource
model
and
how
they
sort
of
like
obviously
cross
plane,
does
multi-cloud
stuff
right,
so
they
work
with
gcp
and
aws
and
azure
and
equinix,
and
all
sorts
of
things
right
that
have
different
managed
services
for
things
like
databases
and
object,
storage.
D
So
crossplane
has
this
multi-layered
architecture
where
they
have
an
abstraction
at
the
top
that
they
call
a
database
right
and
then
underneath
that
they
have
separate
providers
for
the
clouds
and
those
cloud
providers
modules
have
their
own
custom
resource
definitions
that
are
scoped
to
that
specific
provider.
D
So,
for
instance,
they'll
have
the
rd
or
aws.crossplay.io
database,
and
that
is
sort
of
specialized
for
an
rds
database
instance
right
and
in
that
way
they
sort
of
have
this
layered
of
well,
it's
driver
or
provider
specific
and
then,
on
top
of
that
they
have
this
extensible
abstraction
layer
that
allows
them
to
sort
of
you
know,
have
some
flexibility
in
sort
of
farming
out
the
specialized
logic
to
the
individual
cloud,
apis,
the
back
those
those
providers.
D
Obviously,
in
ack
land
we
don't
we
don't
do
that
right.
We're
just
something
that
they
build
on
top
of
right,
so
our
our
crds
are
namespaced
under
services.kates.aws,
so
it'll
be
like
rds.services.kates.aws
forward.
Slash
db
instance,
right
that'll,
be
our
crd
yeah.
D: Like, if you were interested in making a generic backup operator, you could work on top of an ACK RDS controller — read and write ACK CRs, rely on the RDS controller to do its thing, and then focus on the more data plane aspects. You know, rely on the RDS controller to do the CRUD operations for things like DBInstance, and then focus on the more data plane operations. If you, for instance, had to execute a SQL DDL query or something like that, you might do that in your operator sitting above the ACK land.
A
Totally
but
yeah
the
the
the
exis
there's
an
existing
pattern
in
the
kubernetes
community,
which
is
csi
right.
It
just
matters
instead
of
using
taking
the
approach
of
using
crds,
it
is
via
gipc
interface.
We
have
the
csi
interface
right,
so
this
is
a
slightly
different
approach,
but
thank
you
yeah,
of
course,.
J
So
if
I
may
summarize
what
was
just
said,
basically,
the
goal
with
the
acs
is
to
sorry
with
ack
above
the
sdk
is
to
expose
aws
apis
through
crds
custom
resources
and.
J
Anything
beyond
that
is
basically
up
to
us.
So,
for
example,
if
somebody
wants
to
back
up
an
application
that
has
multiple
components
like
s3
abs,
you
know
dynamic
db.
They
can
leverage
these
crds
and
controllers
to
do
this
job,
but.
D
But
we
don't
have
like
a
a
mission
more
than
just
create
the
best
generated
controllers
for
each
of
the
individual
aws
service
apis.
We
don't
have
like
a
a
a
larger
mission
or
a
higher
scoped
mission
than
that
yeah.
D: ...so for those kinds, you'd just use the Kubernetes RBAC model.
A
Basically,
the
ack
controller
for
will
use
whatever
the
annotation
on
the
particular
name
space
as
the
identity
to
talk
to
adobe
servers,
correct.
D
That's
something
the
ack
side
right,
yeah,
the
I
enroll,
that's
associated
with
the
service
accounts,
is
what
the
ack
controller
will
use
when
it's
communicating
with
the
aws
apis,
but
your
sort
of
higher
level
controller
you
wouldn't
need
to
call
to
the
aws
apis
right,
you'd,
be
communicating
only
with
kubernetes
api
and
writing
custom
resources
of
kind.
D
You
know
s3.services.kates.aws
forward,
slash
bucket,
so
your
service
account
that
your
controllers
running
in
would
need
to
have
permission,
read
and
write
permissions
for
that
kind.
In
the
kubernetes
rbok
model.
You
wouldn't
need
to
worry
about
any
of
the.
I
am
permissions
because
that's
on
the
ac
side.
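The RBAC split described here can be sketched like this, assuming the Bucket kind lives in the `s3.services.k8s.aws` group as stated above; note that only Kubernetes permissions appear — the IAM side stays with the ACK controller:

```yaml
# Sketch: a Role for a higher-level operator that only reads and writes
# ACK Bucket CRs. No AWS IAM permissions are involved at this layer.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ack-bucket-writer
  namespace: backup-system   # hypothetical namespace
rules:
  - apiGroups: ["s3.services.k8s.aws"]
    resources: ["buckets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```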
E
And
the
granularity
is
at
the
kind
of
top
level
you
know
top
level
service,
not
the
instance
level.
It's
per
account
right.
So
if
I
have
multiple
instances
of
rds,
if
I
have
access
to
the
control
to
create
objects
in
kubernetes
through
the
controller
or
that
the
controller
responds
to,
I
wouldn't
be
able
to
get
access
to
individual
rds
or
revoke
access
to
digital
rds
instances.
D: I'm trying to think whether there'd be a way to do that on the Kubernetes side in ACK. I suppose you could remove the config map item for that AWS account, to prevent the controller from being able to assume that role and target that account — like, if you were worried that you got hacked or something and a particular account is breached, and you want to remediate it.
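The config map being referred to is, roughly, a mapping from AWS account ID to an assumable role ARN. The name and namespace below follow ACK's cross-account documentation as I understand it, but treat the exact identifiers as assumptions:

```yaml
# Sketch of the cross-account role mapping the controller consults.
# Deleting an entry prevents the controller from assuming the role
# for that account (the remediation discussed above).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ack-role-account-map
  namespace: ack-system
data:
  "111122223333": arn:aws:iam::111122223333:role/ack-s3-controller-role
```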
J
I
guess
a
follow-up
question
would
be
let's
say
some
changes
happen
outside
of
ack
through
you
know,
regulators
apis
or
let's
say
we
already
have
a
deployment
aws.
I
want
to
be
able
to
switch
over
to
ack.
Does
ack
controller?
Do
the
controllers
discover
the
state
in
aws
on
auto
populate
these
crcrs
or
not.
D
Auto
populate,
so
we
have
this.
This
thing
called
adopted
resource,
so
we
we
thought
that
the
auto
adoption
or,
like
you,
know,
automatically
taking
a
resource
that
was
pre-existing
and
bringing
it
under
ack
management.
D
We
didn't
like
the
safety
aspects
or
lack
of
safety
aspects
of
that,
so
instead
our
resource
adoption,
it's
explicit,
like
you
have
to
you-
have
to
call
cube
cuddle,
create
you
pass
in
a
manifest
that
describes
the
resource
that
you
are
adopting
and
the
ack
controller
sees
that
adopted
resource
kind
that
cr,
that
is
a
type
adopted
resource
and
will
fetch
the
latest
observed
state
and
write
that
latest
observed
state
into
desire.
D
So
it's
it's
like
an
explicit
thing.
You
have
to
say
I
am
asking
the
ack
controller
to
take
this
particular
resource
under.
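An explicit adoption request along these lines is what's being described. The spec layout matches the AdoptedResource shape as I understand it from ACK's docs, so verify field names before relying on them:

```yaml
# Sketch: asking the ACK S3 controller to bring a pre-existing
# bucket under its management.
apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: adopt-legacy-bucket
spec:
  aws:
    nameOrID: legacy-bucket        # identifier of the existing AWS resource
  kubernetes:
    group: s3.services.k8s.aws
    kind: Bucket
```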
J: And these controllers and CRDs — do they have, like, detailed docs, or is this just more like a —
D: Oh, absolutely, yes. Let me — we have docs on —
H: I think the SageMaker controller has some of these.
H: Well, I don't think we have ones that are generic for all controllers.
D: Yeah, it's definitely missing in our documentation. Sorry, everyone.
D: It's a fairly simple custom resource, and actually I'll just show you the definition of some resources — that's easier. One sec.
A: Cool. While you are searching, are there any questions for Jay, Amine, and Stephen?
A: All right, if there are no more questions: thank you so much, Jay, Amine, and Stephen, for having this conversation — this discussion — over here. It's totally fine that you weren't prepared; that's okay, it happens all the time, and I think you did a nice job.
A: If there are no more questions, then I would like to move to the next agenda item. Let me present my screen. One second.
D
Thank
you
very
much.
All
I'm
gonna
drop
off.
Now
I
put
a
link
to
our
community
meeting.
We
have
a
zoom
community
meeting
on
mondays.
Thank
you
all.
I
really
appreciate
the
the
questions
thanks.
Thank
you.
Thank
you.
A: Well, after I read through it, I think I personally benefited a lot from all the use cases in all the sections we discussed, and also from seeing where components are missing. If you have more updates, please do so. I think Shin and I will discuss a final date, maybe, so that we can have the first draft out for review.
C
No,
I
think,
we're
we're
good
yeah,
I
think
probably
a
couple
of
sections.
We
still
need
a
little
bit
update,
but
we
are
getting
pretty
close.
A
Yeah
we'll
be
actively
paying
again.
Thank
you
all
who
have
been
actively
working
on
this
really
appreciate
that
we
got
about
10
minutes,
so
please
you
can
take
it
over
for
now.
K
Oh,
I
just
want
to
say
that
we
have,
I
kept
just
like
it
be
a
very
initial
document
of
it
there's
a
lot
of
things
that
we
need
to
design
need
to
discuss.
I
think
the
main
thing
is
that,
right
now
we
define
an
api
for
our
service
for
the
cbt
service.
K
My
to
our
view
is:
do
we
go
ahead
and
try
to
design
that
service,
or
we
just
kind
of
state
what
it
look
like,
what
it's,
what
it
should
do
in
the
general
weight?
What
do
you
guys
think
because
for
my
part
of
viewing
we
to
create
like
a
complete
service
and
then
let
vendor
you
know
attach
to
it
or
implementing
like
a
plug-in
or
something
like
that?
I
think
that
would
be
more
useful.
K
Otherwise
we
just
state
the
api.
It
is
very
it's
very
vacant.
No,
I
mean
it's
kind
of
it's
not
really
useful
at
all,
so
that
I,
I
would
say
we
implement
a
service
and
make
it
a
way
to
help
vendors
implementing
an
add-on
or
something
or
hook
on
to
it
easily.
So
that's
in
my
opinion.
What
do
you
guys
think?
K
We,
let's
just
say,
like
a
kubernetes
service,
that
that
any
backup
vendors
can
talk
to
when
they
want
to
get
the
chain
block
check
the
cbt.
That's
the
differential
snapshot
of
some
volume
right.
K
That
is
what
I
have
in
mind,
but
it
has
to
be
done
in
a
way
that
any
storage
vendor
can
easily
hook
up
their
started
to
it,
so
that
it
can
serve
these
different
snapshots
right
now.
The
the
document
that
you
guys
seen
on
the
screen
right
now
is
very,
very
it.
It
doesn't
have
a
lot
of
material
and
it
is
only
have
a
defined
api
and
nothing
else
substantial.
K: ...the whole service, instead of just stating an API.
A: Yeah, I just want to clarify: is the idea still to take the CSI route? I understand — a service that can be built on top of the APIs — but my experience tells me that we'd better take baby steps. First, we can have a full picture of how these APIs will be used, but it will be very challenging to combine everything you want, and I'm not sure whether we want to provide a service, or rather CRD-based controllers.
K: ...you know, hogging the resources of the API server. And then we discussed, and we decided: hey, let's make it a service instead. In that case, whoever provides the service provides it to backup vendors, and the backup vendor simply talks to this service to get the differential snapshot, right? I —
A: I remember that conversation — it's just about whoever is using this API, and what the format of communicating with the service is. Yeah.
J
Netapp
is
like
the
snap
effect
that
you
know
that
provides,
I
imagine,
other
vendors.
They
have
their
own
apis.
You
know
so
obviously
that
part
of
the
api
is
different,
but
I
think
there's
still
a
value
like
if
for
the
backup
provider,
so
I
guess
a
backup
provider
will
have
different
plugins
or
modules
for
each
one
of
these
vendors
and
that's
something
the
backup
providers
will
provide.
But
the
question
is:
is
there
any
value
on
standardizing
the
front
end
and.
J
Because
I
feel,
like
you
know
this
back
basically
back
over,
this,
would
handle
the
back-end
side
for
different
vendors,
and
this
service
is
not
really
is
mainly
meant
to
be
used
by
backup
vendors
internally,
like
users,
don't
see
any
benefits
of
knowing
the
change
blocks.
There's
really
no
reason
for
them
to
know
to
be
aware
of
the
change
box.
As
long
as.
I
So
I
had
one
question
so
why
can't
we
go
via
csi
used
csi
like
this
snapshot
right,
so
each
vendor
has
to
provide
some
way
right
to
provide
give
that
differential
snapshot
list.
So
can't
these
be
like
a
csi
sidecar
for
that
csi
driver,
which
that's
exactly
my
point,
so
you
thought.
J: Okay, so then we are deciding this has to be — so basically, I think this kind of changes the discussion now, because — no, I don't think that was initially the idea. Okay, okay.
A: Yeah, it's a gRPC interface, right. What we expose in Kubernetes — I mean, we do have concerns around the size of the CR, but we can try to see whether we can find a way out of this, or maybe, as you said, just provide a standardized service. But the interface to the storage vendors, right — it should stay down there. That's my opinion. I think it is still feasible to do this at the CSI level.
J: So, going back to the discussion we were having a few months ago: we were saying, like, you know, the Kubernetes API server handles 20 queries per second, and etcd only handles small objects — a meg or smaller, you know. So we all discussed the limitations of using the Kubernetes APIs for this purpose, right? No?
A: No — there's an extension API service in Kubernetes. Instead of using the core etcd, you can use that, and that one is more customizable. — I see, okay. — Right, and that again — I think we should maybe discuss this: have a smaller group discussion on it. I think we should revisit the KEP a little bit, on whether we should do this in CSI.
A
I
believe
personally
doing
this
in
the
csi
is
the
right
way
to
do
right,
but,
but
I
I
also
see
your
point-
I
see
your
concerns
as
well,
but
to
shin's
point
right.
If
we
make
this
a
pure
service
and
just
define
a
service
and
having
different
implementations,
this
doesn't
make
a
cap
in
kubernetes.
A
Yeah
sounds
good
all
right
all
right
thanks
all!
We
are
about
time,
one
more
minute,
any
last
minute
questions.
K: Because we already took most of the time for the previous topic — so maybe let's get to it at the next meeting.
A: Yes. What we can do is: there's a meeting agenda doc, right? Whoever is interested, just put your name over there; then we can send out the invitation. How does that sound? Or we can just revisit it two weeks later, in this slot — I'm fine either way.
A
If
normal
comments,
then-
or
let's,
let's
make
two
weeks
later,
this
slot,
we
will
discuss
this
change.
Blockchain
sounds
good.
Okay,
all
right!
Thank
you.
Bye,
bye,.