Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 29 June 2022
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
B: Yes, okay, just a very quick recap for those of us who may have missed the last couple of meetings: we have been working on the change block tracking (CBT) KEP for the past couple of weeks.
B: Last week we got some feedback from the production readiness review group, basically asking us about our proposed design with its two-hop type of request mechanism.
B: So I just want to clarify that the production readiness review group did not outright reject our proposal. They're just saying that we should consider something like an aggregated API server.
B: Ultimately, the approval decision on whether to go ahead with this or not rests with SIG Storage, especially since this is an out-of-tree implementation. I think the implication is more that, within Kubernetes, there hasn't so far been any two-hop request mechanism of this type, so it's just more of a heads-up.
B: If we go ahead with this, we might end up with more questions down the road in terms of supporting it, maintaining it, and so on.
B: So, being good KEP citizens, we decided to take a step back and re-evaluate the very early proposal of using an aggregated API server. Late last week I had a chance to speak with Stefan Schimanski; some of you might know him from Red Hat. He has been very active with the Kubernetes API server and with the API Machinery group.
B: I think he's now part of the kcp group as well. So I basically shared with him the problems that we are trying to solve, especially in light of how an aggregated API server could help us out here, and then documented some of the key points that we talked about under the alternatives section inside the KEP. As I mentioned to Xing, I didn't write a sub-proposal inside this KEP.
B: I can't write a sub-KEP inside this main KEP. The general feedback from Stefan is that yes, we can use an aggregated API server, but to accomplish what we want to accomplish, we don't need one. So the feedback is not "no, that's evil, don't even try it"; it's more "you can do it, but I'm not sure it's going to help you do what you want to do."
B: Yes, we can modify the storage layer and say, hey, don't put anything into etcd, but then the request and response paths still go through the Kubernetes API server, and the only way to avoid bogging down the Kubernetes API server is, again, the mechanism we have proposed: providing some sort of out-of-band callback...
B: ...URL, right? And with that, the main point is that we don't need an aggregated API server to do it, because we...
B: Yeah, so basically it really boils down to an implementation detail.
B: Not a diagram of the aggregated API server, no. I think in general, for those of us who are not familiar with it: you still create a custom resource, but you don't need to define a CRD. With an aggregated API server you do your own wiring: you say, this is the GVK and the GVR, and then you bind it to an endpoint.
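For reference, the wiring described here is done with an APIService object that tells kube-apiserver to proxy a whole group/version to a backing Service. A minimal sketch; the group, version, and service names are illustrative, not from the KEP:

```python
# Sketch of the APIService object that wires an aggregated server into
# kube-apiserver. The group/version/service names below are hypothetical.

def api_service_manifest(group, version, service_ns, service_name):
    """Build an APIService that proxies requests for <group>/<version>
    to the given in-cluster Service."""
    return {
        "apiVersion": "apiregistration.k8s.io/v1",
        "kind": "APIService",
        "metadata": {"name": f"{version}.{group}"},
        "spec": {
            "group": group,
            "version": version,
            "service": {"namespace": service_ns, "name": service_name},
            "groupPriorityMinimum": 100,
            "versionPriority": 100,
        },
    }

manifest = api_service_manifest("cbt.example.io", "v1alpha1",
                                "csi-cbt", "cbt-aggapi")
print(manifest["metadata"]["name"])  # v1alpha1.cbt.example.io
```

Once such an object is registered, requests under that group/version are forwarded to the aggregated server instead of being served from etcd.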
A: Hold on, but you would still need to submit the request, right? So, okay, can you maybe just work through the steps, because otherwise I'm not getting it. I don't know if you have more details under the infrastructure needed section. Let's just go through this CBT, for example. Right now we have the CRD there, right? What is that called, delta? What is it?
B: Yeah, so from a user perspective (and a user, again, can be a human or another software process) it will create a VolumeSnapshotDelta custom resource, but with an aggregated API server you don't need to deploy the custom resource definition.
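As an illustration only, such a VolumeSnapshotDelta object might look like the sketch below; the group, version, and field names are hypothetical, since the KEP had not fixed a schema at this point:

```python
# Hypothetical VolumeSnapshotDelta custom resource, as a user (human or
# backup software) might create it. Group, version, and field names are
# illustrative; the KEP may define them differently.

volume_snapshot_delta = {
    "apiVersion": "cbt.example.io/v1alpha1",
    "kind": "VolumeSnapshotDelta",
    "metadata": {"name": "delta-1", "namespace": "backup"},
    "spec": {
        # The two snapshots to diff: the base and the target.
        "baseVolumeSnapshotName": "snap-monday",
        "targetVolumeSnapshotName": "snap-tuesday",
    },
}

print(volume_snapshot_delta["kind"])  # VolumeSnapshotDelta
```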
A: It's deployed somewhere else, in a different place?
B: Yeah, yeah, and this is with an aggregated API server. Stefan highly recommended that we don't run it as a sidecar. He said one of the limitations of aggregated API servers is that they're not scalable, in the sense that if you look at the metrics server, there's only one instance of it: you can scale vertically, but you can't scale horizontally. So anyway, I guess, Xing...
E: So the original reasoning behind proposing the aggregated API server was that when we started looking at the amount of CBT data, and storing it all in the API server in that CRD, it was like: oh, that's probably more than we want to store. So the idea behind proposing the aggregated API server was that it doesn't need to store things in etcd.
B: Right, not in etcd. So, sorry, going back to Ben's earlier point: with the proposal that we have, we can stream or send the response payload directly back to the user via a separate, direct endpoint, whether that is an aggregated API server or just a plain HTTP server.
B: We can do that. And on the point some of us brought up about dedicated authentication and authorization for the API server: that problem has been solved. Prasad and Abhishek worked on a prototype for it over the past couple of weeks. It basically boils down to the TokenReview API and the SubjectAccessReview API: you just send the token to the API server and ask, can I trust this request?
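The TokenReview check mentioned here can be sketched as follows. The request and response shapes are the real `authentication.k8s.io/v1` schema; the helper functions themselves are illustrative:

```python
import json

def token_review_body(token):
    """Build the TokenReview payload an out-of-tree server would POST to
    /apis/authentication.k8s.io/v1/tokenreviews to ask kube-apiserver:
    can I trust this request?"""
    return json.dumps({
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenReview",
        "spec": {"token": token},
    })

def is_authenticated(token_review_response):
    """Read the API server's verdict out of the returned TokenReview."""
    return json.loads(token_review_response).get(
        "status", {}).get("authenticated", False)

# A response as the API server might return it (abridged).
resp = json.dumps({"status": {"authenticated": True,
                              "user": {"username": "backup-sa"}}})
print(is_authenticated(resp))  # True
```

SubjectAccessReview follows the same pattern for the authorization half: the out-of-tree server forwards the caller's identity and asks the API server whether the action is allowed.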
A: For now, can you not go back to the existing proposal and just focus on this alternative? Just make sure that we all understand how this actually works, because I see Michelle is on the call, and last time I think she asked you to write up how this works, right? So maybe can you just go step by step?
B: Right, okay. Yes, I can go through this step by step, but one thing I want to make sure we decouple from each other is that the API, the CRD, whatever we're going to call it, is separate from the aggregated API server.
B: The API doesn't change whether it's behind the API server, an aggregated API server, an HTTP server, or whatever you want to call it. The aggregated API server is an implementation detail; it's just the transport mechanism.
B: It would be the same; it would look exactly the same.
B: Okay, so how it will work: with an aggregated API server (I don't have a YAML example here), from the user's perspective they can still create a custom resource that says: do this, here are the base volume snapshot and the target. They just create it in the form of YAML, and then the aggregated API server would watch for it and respond to it.
B: So yeah, let me finish. Yes, basically it will do what we have been talking about, so the aggregated API server...
E: So if you say, for example, I want a... you put in a request, what is it called, the...?
E: So I think the original thought was that everything works the way it normally works, except for the CBT records themselves. That was my thinking. You write your VolumeSnapshotDelta request, and that all goes through the normal mechanisms: it goes to the API server, it goes into etcd, the controller gets triggered, it updates the spec and all the rest of that. All of that works the way it normally works.
B: So I guess that goes back to the earlier point. Following that pattern, following that thought: there is a separate route for the CBT request, okay, CBT, right?
B: Yeah, exactly. So now you go back to Ben's very early point: why do we need an aggregated API server for the second hop?
E: So rather, it appears that all the records exist: you could query, you could list, you could retrieve records individually, and those CBT records would, from the Kubernetes user's point of view, look like a thousand CBT records, one per individual segment, sitting in the API server, but they're actually handled by the aggregated API server instead, so they don't have to live in etcd. That, I thought, was the idea behind it.
A: It sounds like we have not understood how the aggregated API server... I mean, it sounds like Dave was saying that you can actually have a normal API server and an aggregated one, like two things, but then, yeah...
B: I feel like that's where we have a bit of a disconnect right now. I'm saying we do not need both a controller and an aggregated API server.
D: There's one more thing about the aggregated API server: it sounded like there are two ways to do it, and one of them would actually involve three hops, not two, right? If you have to go from the client to the Kubernetes API server, then from the Kubernetes API server to our server, and then from our server to the actual CSI implementation, that's three hops.
B: Sorry, can I... let's say it's an aggregated API server. I know your earlier question still stands: you want to know how it works in the context of this KEP, right?
B: A fair question, a fair thing to do, so okay, let's try that first. Can I just quickly walk us through a normal aggregated API server, without the extra hops, and then we can decide? Part of it is, I feel like, unless there's a hard requirement for an aggregated API server...
B: Okay, let's try this. With an aggregated API server, we would deploy it as an independent service, a Deployment workload. It cannot run as a sidecar because it doesn't scale: look at the metrics server, there's only one instance of it; you cannot have multiple replicas of the metrics server. So from a user perspective, yeah, they would create the request, right?
B: You pick up the request, and then it makes that second gRPC call to the CSI driver. And then, of course...
B: It needs some way to discover it. Whether it's a ConfigMap or something inside the CSIDriver object, there has to be a URL that says: if you see a VolumeSnapshotDelta with this CSI driver name, you call this CBT endpoint. You need some sort of discovery mechanism.
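A minimal sketch of the discovery mechanism being described, assuming the mapping is published somewhere like a ConfigMap; the driver names and URLs are made up:

```python
# Hypothetical discovery table mapping a CSI driver name to its CBT
# endpoint; in practice this could live in a ConfigMap or on the
# CSIDriver object. Driver names and URLs are made up.
CBT_ENDPOINTS = {
    "hostpath.csi.k8s.io": "https://cbt-hostpath.csi-cbt.svc:6443",
    "block.example.io": "https://cbt-block.csi-cbt.svc:6443",
}

def resolve_cbt_endpoint(csi_driver_name):
    """Return the endpoint to call for a VolumeSnapshotDelta naming this
    driver; fail loudly if nothing has been registered for it."""
    try:
        return CBT_ENDPOINTS[csi_driver_name]
    except KeyError:
        raise LookupError(
            f"no CBT endpoint registered for {csi_driver_name}")

print(resolve_cbt_endpoint("block.example.io"))
```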
B: The other challenge is a scaling one, right? Scaling, and the reconciliation: we need to make sure that only one API server is doing the reconciliation.
D: I would separate out the does-it-scale question, and maybe say for the first version: assume that one sidecar on one node is enough, see if we can make that work, and then tackle the scaling question as a more generalized one.
D: Can you even make the CSI driver scale? Because CBT only matters if you're generating lots of snapshots, and snapshots mean something has to be creating those snapshots, and if that part can't scale, you have to scale that too. There are lots of scaling problems in CSI that haven't been tackled.
B: Okay, so assume that somehow the aggregated API server is able to find the driver plug-in and make the CSI gRPC call; I think we're familiar with that segment of the request. So the gRPC response comes back to the aggregated API server.
B: Now the challenge is: how does the aggregated API server send that response payload back to the user? It cannot put it into the status of the resource, because if you put it into the status of the resource, the only way the user can see it is if you persist that data into etcd, because it's declarative, right? It's not an HTTP request-response call. So I'm just trying to...
E: The aggregated API server versus the CBT server, right. So I think the question is: what's the API for getting CBT data back? One option was to put it all into etcd and try to retrieve it that way; we didn't think that would work well, we didn't like that one. The aggregated API server idea was that you could access the data via the Kubernetes mechanisms, but it's not stored in etcd, and when you try to read a CBT record, that could go through the aggregated API server, which would then do the gRPC call.
B: Yeah, yeah. So before we go down that path: so far, Xing, is it clear how things work?
A: Okay, so there's one aggregated API server, which is the central controller, right? We talked about that one. Somehow we need to be able to make calls to a CSI driver; we skipped over how to do that, but I'll just assume it can be done. Then, okay, how are we going to get the status back? Going back to Dave's question: I'm not picturing how this would work. Is everything processed in this one controller, and then what? I'm guessing not.
A: One controller, no sidecar, but that's not the point. The problem we want to solve is how we get that data back, getting those changed blocks back, without storing them in the main API server. But I'm still not getting it; that's the question.
C: If you remember the Kubernetes API, a pod has a subresource for logs, and you can stream the logs from the kubelet through the API server to the client, and the logs can be megabytes or gigabytes, and nobody cares; it's not stored in etcd. Similarly, you can have a subresource for CBT.
C: No, it's something completely different. You will still have the delta, the data object with spec and status, and the data object will also have a subresource with a custom protocol, I don't know, it can be simple JSON or anything, where you can stream the CBT data from the driver.
C: So you can do that with the aggregated API server.
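The subresource idea follows the same URL layout as the pod log subresource: the main object lives at its normal path, and the stream hangs off a suffix served by the aggregated server rather than etcd. A sketch of what such a path could look like, with a hypothetical group, resource, and subresource name:

```python
def subresource_path(group, version, namespace, resource, name, sub):
    """Build the REST path for a namespaced custom subresource,
    mirroring how /api/v1/namespaces/<ns>/pods/<name>/log is laid out."""
    return (f"/apis/{group}/{version}/namespaces/{namespace}"
            f"/{resource}/{name}/{sub}")

# Hypothetical group, resource, and subresource names.
path = subresource_path("cbt.example.io", "v1alpha1", "backup",
                        "volumesnapshotdeltas", "delta-1", "changedblocks")
print(path)
```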
A: Except for something else, I forgot what that was, yeah. We actually looked at that before, the subresource idea; Michelle actually reviewed something, yeah. But I think we would try not to do that; that's like proposing something in core. The external snapshotter, it can... okay.
C: Just as a proxy, so if you do...
A: Hey, Dave, I don't know if you saw the chat from Michael. I think Michelle is having problems. Michelle, do you want to try rejoining? It would be good if you can talk about what's going on. Let's see, Michelle.
A: But Michelle, do you want to see how this high-level user workflow works with an aggregated API server? Is that your question? Okay, yeah, so I think Michelle still wants to have this. I think that would be helpful for me as well; I'm still getting mixed answers.
E: I think we should at least look at the different options and then say: this is why we didn't take this one, here are the pros, here are the cons, and the cons outweigh the pros. The problem we had was: how do we get the CBT data back in a way that doesn't blow up etcd? Because it's not just a single spec or a single delta request; you might have hundreds or thousands of them if somebody doesn't clean them up. You wind up with large data, so every proposal right now is going...
A: So yeah, Dave, it sounds like the aggregated API server would still be kind of similar to the main proposal. You could have the two hops, but it's just going through this aggregated API server, maybe more.
B: I thought... okay, that's slightly different from my understanding of what Yang was talking about. We don't need a two-hop with the aggregated API server, because we proxy it through the Kubernetes API server. That's...
E: It's still an extra hop, I mean. I think there are basically two ways to use the aggregated API server: one would be to essentially return a stream, like the logs do, as a subresource, and the other, I believe, is that you could simply provide a set of records through the aggregated API.
B: Okay, so yeah, that sounds like a reasonable next step to research. Unless there's something we really want to dive into with the aggregated API server right now, can I share a different approach, without the two hops and without an aggregated API server?
B: Okay, so what if, instead of providing a callback URL ourselves or telling the caller to call a subresource endpoint, we ask the user to provide us, inside the spec, with a URL to send the data back to? I've been thinking about this, so potentially what we do is, if you can see my slide, inside the spec...
B: What that looks like is: if some of us have seen the admission webhook resource before, it's really that you tell the service what to call to complete the request. So in this case the backup server creates this VolumeSnapshotDelta...
B: It provides us with either an in-cluster service name or an external URL (either one, not both), and then provides us with a CA bundle so that we can trust the endpoint. Our controller still functions as a sidecar: we make the gRPC call, get the payload back, and then we send the payload to the endpoint or the service provided by the backup server.
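The callback shape described here mirrors the `clientConfig` of an admission webhook: exactly one of a service reference or a URL, plus a CA bundle for trusting the endpoint. A sketch with illustrative field names (this was a proposal under discussion, not a settled API):

```python
import base64

def callback_config(ca_pem, url=None, service=None):
    """Build a webhook-style callback config: exactly one of `url` or
    `service`, plus a base64-encoded CA bundle, modeled on the admission
    webhook clientConfig. Field names are illustrative."""
    if bool(url) == bool(service):
        raise ValueError("exactly one of url or service must be set")
    cfg = {"caBundle": base64.b64encode(ca_pem).decode()}
    if url:
        cfg["url"] = url
    else:
        cfg["service"] = service
    return cfg

cfg = callback_config(b"-----BEGIN CERTIFICATE-----...",
                      service={"namespace": "backup",
                               "name": "backup-receiver", "port": 8443})
print(sorted(cfg))  # ['caBundle', 'service']
```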
B: Yeah, so the pagination would have to happen inside the CSI driver, so that...
B: You get back, yes, a stream of paginated responses.
B: We can update the... not in mid-flight, not in mid-flight. The spec can contain some offset and pagination parameters; that would be the starting point, and after that it would just be like the kubectl or Kubernetes list command: the back end sends you back chunks of responses, paginated responses, but over one single stream.
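The offset-plus-pagination idea can be sketched as a generator that answers one request with a sequence of pages; the record shape and page size here are made up:

```python
def stream_cbt_records(records, start_offset=0, page_size=3):
    """Yield paginated chunks of changed-block records starting at
    start_offset, the way a backend could answer one list call with a
    sequence of pages over a single stream."""
    for i in range(start_offset, len(records), page_size):
        yield records[i:i + page_size]

# Made-up changed-block records: byte offset and length per block.
blocks = [{"offset": n * 4096, "size": 4096} for n in range(7)]
pages = list(stream_cbt_records(blocks, start_offset=2, page_size=3))
print([len(p) for p in pages])  # [3, 2]
```

A client that crashes mid-stream could resume by issuing a new request with the last offset it saw, which is the recovery concern raised later in the discussion.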
B: It's either-or: if it's an in-cluster service, it would be the service name; if it's some external endpoint, it would just be a URL. So this approach is exactly the same as what the admission webhook does: you tell the API server where to find the webhook, you just call it, and it will trust it only if the TLS endpoint on the other side is signed by a cert that it trusts.
E: Yeah, I don't know. This one... my gut feeling is that I wouldn't want to do that. I mean, things like: what happens if it can't contact the endpoint?
E: You can look at the way the gRPC mechanism works: the CSI driver serves gRPC, but that gRPC server is providing services to Kubernetes, the same way the admission webhook is providing services to Kubernetes, saying: yes, I'm here to tell you when you can admit something.
E: Yeah, like right now: when I was working on Velero, the thing I liked to do with Velero was just run the whole thing on my desktop, and because it's talking to the API server, it doesn't matter where it runs, and I didn't have to have an IP address that the Kubernetes cluster could talk to.
E: You know what I mean, though? It has to be accessible. So if I try to debug, for example, the CBT driver on my desktop, it's got to provide an endpoint that the Kubernetes cluster I'm working with can talk to.
B: So yeah, undeniably it will mean more sophistication on the backup software side, but it feels like it still boils down to Kubernetes networking, right, in terms of whether...
D: No, no, it's a question of crossing NAT boundaries and crossing firewall boundaries. Today you can, on your laptop on an airplane, connect to a Kubernetes cluster that's running in California, and everything's fine, because all the firewalls and all that let your packets travel from your laptop to the cluster. But the likelihood of that cluster being able to ping your listener on the airplane is nil. It'll never get through all the firewalls and all the NAT that happen in that scenario.
D: I think... I don't know.
E: There's also another advantage to having the sidecar controller be able to return a URL in the status: the controller then has the ability to implement itself in different ways. It could, for example, kick off a separate process just to handle each volume independently, or each session independently, because it has to look after its own memory and scale. So there are a lot of advantages to just staying with the traditional model over this.
B: Yeah, well, I don't think you can reach it, because we're not exposing anything on the controller side, but okay, it sounds like the main...
B: Yeah, it basically shifts the complexity to the backup software side, right.
E: Well, what do you expect to happen? For example, it starts streaming data, it streams CBT records to me, good, and halfway through some stupid thing happens: somebody restarts the backup software, we hit a bug, whatever it is, the backup software crashes. Now what? How do you restart that stream of data coming across?
E: So this VolumeSnapshotDelta would have to get written once per request for sending data, right?
B: Well, yeah, we'd probably have the same thing on the controller side, which, yeah, feels like something that we have to address on the back end.
E: And the client needs to be able to handle all these inbound scenarios, which currently it doesn't have to. Currently a retry loop would just be: okay, is there a VolumeSnapshotDelta record? Yes, okay, then I'll pull the URL from it, or the records, and I'll pull those out, and I can restart from that. Versus... it becomes very imperative, not declarative, right, because it's...
B: I don't want to go down that path, but okay, so, cool, thanks for the feedback on this. It looks like this does not work, but hey, at least we talked about it, we explored it, it's great. I can add this to the alternatives section in the KEP. So with five minutes left, I want to just confirm the next step forward with this KEP.
G: ...as a resource, and then when the client does a list on the change logs, our aggregated API server will do an on-demand CSI call for the snapshot and then return the response as part of the list API response.
B: Cool, thanks, Prasad. So right now I'm hearing two streams of thought here, which I think are not hard to reconcile. There was still that lingering question around whether we need an aggregated API server, and it sounds like it's worth spending more time investigating it.
A: So one action item would be, and maybe Dave can help with this, to draft a high-level user workflow with the aggregated API server, and then compare that with the proposed, existing approach.
A: Good, so that would be something that we could go through in the next meeting.
B: And so far it sounds like the only benefit of an aggregated API server is that it allows us to call the subresource in a somewhat imperative manner.
D: Well, and you get all the ergonomics with the RBAC and the same API endpoints that you would get anyway, so you only have to talk to one thing, instead of having to talk to a whole other API server with its own certificates and its own authentication. Even if it piggybacks on the authentication, it's going to have its own certificates, and it's going to have to call out to do delegated authentication. An aggregated API server does have the benefit that it just wraps all that up.
B: Okay, so: are there any specific criteria that we're looking for at this point, or do we just want to get a sense of how the aggregated API server would look? Because...
A: Right, I think it's not clear today, because people are talking about different things; it was not very clear. That's why, if you look at this section, it's very short, right? So we'd like to have a write-up of how that would work, as a high-level user workflow.
E: I've been asking what we are doing, and I think the goal is to get pros and cons for the different cases. There's always a compromise, there's always something that doesn't work quite right, so we want to at least be able to put it all on the table and say: here's the one that we picked, and here's why.
D: I would say some of the options we can rule out immediately, like the callback mechanism, and I think we can rule out anything that's going to persist data in etcd. But once you eliminate the things that are definite nos, there are still some different options, and at that point I think it's going to come down to user experience, how much implementation work it's going to be, how hard it's going to be to maintain, and questions of scaling.
A: Hey, sorry, I think we are at the top of the hour and we have another meeting coming up. So let's meet again and talk about this in the next meeting, and we can chat on Slack.