Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 29 April 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
...purposes, you'd rather not be running in a container. You'd rather have the sidecar in a container, but the driver itself running as a binary, right? Just because it's easier.
A
That way... so was the old way more conducive to running not in a container?
C
How about this? Okay, so I'll add a developer's doc, like building it and how to run it locally without putting it in a container. Would that help? Okay.
C
We're doing that. I don't know if you've looked at this website, containerobjectstorageinterface.github.io: if you scroll up to Docs and then look at "Authoring COSI drivers"... it's a reflection of what's there in GitHub, by the way, so it needs to be updated. But we are working on filling this up.
E
Ben, it's good news to hear NetApp's gonna be writing a COSI driver. I wanted to also share in the meeting that we have a guy at Red Hat who is working on a Ceph object driver, you know, RGW stuff. I did an initial pass at the code. It's rough right now, but you know it's going to get better, and we'll contribute that as soon as we get something working.
C
It is, it's a great thing, yeah. I'm very happy to hear this. I was waiting for this to happen, so now all of us can push the API together, then.
E
This is flushing out any weaknesses we have. We're gonna discover them now.
C
Right, right, no, it's very important we do this now. This is the way to improve. So here's the thing, all the vendors and everyone: I think we should all get together and ping Tim to do the API review.
C
So what I'll do is I'll start a new email thread and include everyone here, and we'll go from there with Tim. I'll ask him to take a look at our API one more time, and if any of you can reach out to him through other channels, other means, please do. Hey, Sid.
C
We have... Scality is also working. I believe the Scality driver is working, right?
E
On it, I mean. Maybe we should highlight that. You know, it's at least some validation of the API being reasonable, right?
C
Okay, so yeah, let's do that. Let's do that today. So after the meeting I'll draft an email and have you all CC'd, and maybe, if you have links, if they're public and, you know, if they're open-source projects, send me a link to them so I can also include those in the email.
C
So the idea is, May 13th is our feature freeze. We don't have very long, and Tim's time is going to get even more expensive the closer we get to KubeCon, the closer we get to that date. So we want to keep trying until then. Well, isn't KubeCon next Monday?
C
Okay, so Ben, we will update the docs, like developer documentation that says how to run these things locally, and I think that will help you get started.
C
Yes, sorry, I was talking on mute. Okay, I'll do that. Yeah, and we'll update these docs to also reflect the fact that we've moved the kustomize templates from the existing sidecar repository into the sample driver.
C
Yeah, yeah, yes. I mean, right now we do a manual copy, so [name inaudible] has worked on it. Every night he's running a GitHub Actions job (it's free) to clone this repo and then take the docs out, and then... great, yeah. Unfortunately, he can't join us during these meetings, but he's been a great help; I thought I'd mention it. Yeah, and along with his main job, he's also a lecturer at Northeastern University.
C
Yeah, a lot of the time it's like, "Hey, how's it going?" and he'll be like, "Well, I'm evaluating tests right now, let's see..."
C
Yeah, all right, okay. So we will update the docs; I think that's very important. The next thing is the API review. I think we should focus all our efforts on getting this through. So in terms of what is needed: we're all in agreement about what this API should be for alpha, and Jeff, as of yesterday, has updated the KEP to reflect the latest changes.
C
All that's left is for Tim to take a look, and he'll ask us questions if there are any, and then help us move this forward. That's what I'll say in the email, and that's what needs to happen. So, do you all think we should invite him for a call, like a video call? The idea being we can quickly finish it if he joins, rather than it being this call... you know, maybe we could.
A
Yeah, I mean, a Zoom call is way better, and it could be this time slot or any time slot, I mean. Okay, okay, let's aim for that, yeah. And as long as it works for him, somebody who can push the API, I wouldn't try to accommodate anyone else's schedule. Yeah.
C
Right, right. I'm going to make the time regardless, like even if I have other meetings I'll push them. Yeah. We have only like two weeks, so it's high priority. Okay, okay! So now that that's out of the way: Chris brought up a really important point last week, or this week, and I just want to bring it up and make sure we're all on the same page about this. It's the idea of status versus spec.
C
I don't know if we talked about this, but Tim mentioned it to us some time ago: for any resource, if any field is updated by the system, it should be status. Like, for instance, in the case of pods: pod IP is in the spec, but then it's always filled in by the system, it's filled in by Kubernetes. So, in a more conceptual sense, shouldn't that really be a status field?
A
Well, there are some fields that could in principle be filled in by either an end user or a controller. Those have to be part of the spec. But yeah, something that is definitely going to be filled in by a controller should be status. All right, so let's talk about...
C
That. So take, for instance, a bucket: a Bucket is created by COSI, and it is not created by the user, unless it's the driverless use case, where the admin goes and creates it manually.
A
Yeah, so maybe I said something confusing. You have to think of it in terms of, like, a client and a server, where the client specifies the spec, and then the server sees that, goes and does something, and then just returns the status. There are situations where there are objects where both the client and server are just different controllers talking to each other, so in that case, the one thing is acting as the client.
A
You know, something that an admin might do, or one controller might do, would be filling in the spec of the Bucket object. A user will never see the Bucket object, because it's not namespaced. So yeah, you would fill in the spec there; like, one controller would fill in the spec, and then there'd be another controller.
A
I think if there's only one controller acting on the object, then its job is to fill out the status. If there are two controllers operating on the object, one of them is like a requester and one of them is the responder. The requester obviously has to fill in the spec to make a request, and then the responder, the controller that is waiting for those objects, sees that request, does something, and then fills in the status to reflect what happened. That's why you're saying client and server, like, yeah.
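A rough Go sketch of that requester/responder convention, in the style of Kubernetes API types. The Widget type and its fields are hypothetical, purely to show which side writes which half:

```go
// Hypothetical types illustrating the spec/status split: the requester
// (a user or a client controller) writes only Spec; the responder (the
// controller fulfilling the request) writes only Status.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// WidgetSpec is the request half, filled in by the requester.
type WidgetSpec struct {
	Size int32 `json:"size"` // desired configuration
}

// WidgetStatus is the response half, filled in only by the responding controller.
type WidgetStatus struct {
	Ready bool `json:"ready"` // observed state
}

type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}
```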
C
Fair enough. What are your thoughts on this?
B
Can you hear me okay? Yeah, yeah, yeah. I think...
C
Okay, so does that mean, then, that this bucket ID, which we moved to status, should go back into the spec?
H
That's what we have, yeah. For the snapshot we have something similar. It's... what is it... yeah, your brownfield case. So it's like the static provisioning case: you'll have the ID ahead of time, so that ID will be in your spec to start with, and then in the status you will always have this field, because in both cases this field should be...
C
Makes sense. So in our case, bucket ID would exist in both places, and in both cases... you know, either the admin fills it up. Well, no: in the case of the spec, either the admin will set it up or the central controller fills it up, and the responsibility for the status is always on the sidecar.
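A hypothetical Go sketch of what that agreement could look like for the Bucket type; the field names are illustrative, not the final COSI API:

```go
package v1alpha1

// BucketSpec: BucketID is set ahead of time only in the brownfield
// (pre-existing / static provisioning) case, by the admin or the central
// controller; it stays empty for dynamically provisioned buckets.
type BucketSpec struct {
	BucketID string `json:"bucketID,omitempty"`
}

// BucketStatus: BucketID is always filled in by the sidecar once the
// bucket exists, in both the dynamic and the pre-existing case.
type BucketStatus struct {
	BucketID string `json:"bucketID,omitempty"`
}
```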
C
That is, I...
C
Interesting. Anyway, okay, that clarifies that, so we'll update the API. That also means we have to update the KEP. Jeff, I know you're working on the KEP; you'll take care of that, right?
E
Yeah. We're talking about bucket ID in the Bucket instance definition, right, in a Bucket resource? Yes. Let me make a note; I can do that. And this is a good time to advertise reviews on the KEP, especially if we're going to get some of Tim's valuable time.
E
I want the community to support the KEP PR, the latest one, the one that has a bunch of commits. I'll squash them later, but they're separate now for your own benefit. We have to be in agreement with that PR, because that PR for the KEP is going to be the basis of Tim's API review, and it's not getting much love right now; we're not getting many eyes on it.
C
Yeah, agreed. So the, you know, key members from each of the vendors: all of you should go review the KEP and then leave an LGTM on it. You know, you don't need to have permissions; just typing in /lgtm is going to count, because Tim is going to see that, and he's going to see that the community approves of this. That is more convincing than us saying the community approves of this.
C
I agree, and it would be good if you did that as soon as possible. I'm sorry, I'm being a little pushy about this, because we have very little time, and yeah, the cost of missing...
E
...this deadline is very expensive. More important than any politics about this is that we really want it to be good, and collectively we're going to get better ideas from the whole community: better points of view, different dimensions of thought. So really, we do want more eyes on it, so that we can make sure that we have the best API possible.
C
So I would actually like people to sign up. Ben, can I count on you to take a look at the KEP and, if everything looks good, you know, leave a review? Yeah, I'll put an LGTM on it after reading through it. Okay. And Vyani, could you also, as a representative, take a look at it? Yep. Okay, perfect, thank you. And Jeffrey... Jeff, you wrote this, so are you biased? Yeah... yes, I'm biased, so I want other eyes.
C
Fish, can you take a look at the KEP and leave a review in time? Yeah, sounds good. Thank you so much. Okay, so there are other fields too. So one was the bucket ID; the other was... let me see this in the history.
C
Okay, are you ready, Jeff? I'm ready. Okay. The other is inside of BucketAccess; the fields are minted secret name and account ID.
C
Right, right. So yeah, the thing about the minted secret is, the CSI adapter has to go fetch this minted secret and then put it into the workload, the pod. The thing about that is, the CSI adapter has no clue which namespace this minted secret resides in. Is it a... is...
A
And who creates those secrets? The sidecar. Yeah, yeah. And is the sidecar intended to create them in a single namespace per COSI implementation, or is it meant to create them in the namespace where the end user is running his workload? Like, where are they supposed to be?
C
Yeah, yeah, so we use that to, you know, put it into an environment variable, and then read it right from... and that's set by the sidecar.
E
Yeah, yeah, so that was one of the changes. Can you repeat that? So I wrote a note that said: keep the ba.spec.mintedSecret and add ba.status.mintedSecret. Is that correct?
C
If I were to see this resource, right, like if I were to see the definition for BucketAccess and I didn't know how things work, I would see two secrets, and I wouldn't know what was what. But by calling it "static credentials secret", it's very clear what's going on.
H
I thought we were actually trying not to say "static", because, like, for snapshotting we don't even mention "static"; we say "pre-existing", pre-existing snapshots. So every time I had a "static" there, I was asked to change it to "pre-existing", actually. It's...
A
Yeah, so... but crucially, in the normal case it's absent, and so, correct, it's a signal: oh, this field is gone, that means there was no pre-existing secret, and the one that got put in the status was from COSI. Yeah. So the case when it's absent is actually the most interesting case from a naming perspective.
A
You know, I'm... I mean, "credentials" sounds better than "secret", now that you say it, as long as it's still accurate. And yeah, with a qualifier or not, as long as...
C
Yeah, yeah, it's okay, we'll go through the grinds. So let's call it "static credentials", then.
H
Oh, I don't think we have that. I don't think we call it "pre-existing" in the name. We just call it a snapshot handle, because we differentiate that. So if you look at our explanation there, right, you can only have one of those two fields. One is for dynamic; the other one is for pre-existing. It's very clear; there's no confusion there. Oh...
H
But no, I'm saying "dynamic" is not in the name, we don't have "dynamic" in there. But I'm saying, if you look at the source, one source, you will only specify that in the case of dynamic provisioning; the other one, like the snapshot handle, you only specify that in the source if it is pre-existing. So in our case this is very clear: you can only have one of them, and so that's why we don't really need that in the name.
C
Yeah, we can, we can become symmetric with snapshotting; I'm fine with this. Purely conceptually speaking, it seemed to be better to call it "pre-existing" and "created", or "generated", I guess... no, even that's not true, because in the case of driverless it's not generated. So for the status, let's just call it "credentials". I think we're all on the same page.
C
Yeah, that should be okay. That should be fine. I mean, if we wanted to clearly say that this was pre-created, you could say "pre-existing" or "static", but I guess...
C
Yeah, okay, great, all right. So let's go back here. So that was for the secret; those are the credentials. Now let's look at account ID. So account ID is needed as a separate field, because we need to know the handle for the account, the one that is going to refer to the user that's actually going to access the bucket, in case we need to revoke access. This is what will be used as the handle to revoke access.
E
Are we talking about the B instance or the BA here? The BA, that's our thought. Okay. And so you're saying we're going to have ba.spec.accountID, for the same reason we have credentials in spec, and in addition, since account ID is going to be filled in by COSI, we will also have it in ba.status.
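Putting the naming decisions above together, a hypothetical sketch of the BucketAccess halves; again, illustrative field names only, not the final API:

```go
package v1alpha1

// BucketAccessSpec carries the pre-existing (brownfield/driverless) inputs,
// set by the admin; both fields are absent in the normal dynamic case.
type BucketAccessSpec struct {
	CredentialsSecretName string `json:"credentialsSecretName,omitempty"`
	AccountID             string `json:"accountID,omitempty"`
}

// BucketAccessStatus carries the effective values, filled in by COSI
// (or mirrored from Spec in the driverless case); AccountID is the handle
// later used to revoke access.
type BucketAccessStatus struct {
	CredentialsSecretName string `json:"credentialsSecretName,omitempty"`
	AccountID             string `json:"accountID,omitempty"`
}
```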
C
Yeah, okay, so let's make those changes and update the KEP. I think I've got...
E
...them noted, okay. If anybody on the call comes up with an argument against the name choices, please just ping, put it in the sig-storage COSI channel. Later today I'll make these changes.
C
Yeah, yeah, any discussions, please bring them up on Slack. We're always there, so we can help answer them. So, all right, to follow up on this, we also need to update... let's see, do we need to update the spec in any way for this?
C
No,
no.
The
spec
remains,
as
it
is.
Respect
is
a
grpc
spec
and
I
don't
think,
there's
any
changes.
Neither
there
we
do
need
to
update
our
controller
and
sidecar
to
follow
this
new
convention,
so
that
needs
to
be
taken
care
of
okay,
and
also
this
is
good
because
it
gives
us
a
way
to
disambiguate
between
a
bucket,
that
was
that
already
exists
and
a
bucket
that
that
we
need
to
provision
like
from
the
sidecar
side.
C
There
was
no
way
to
tell
this
before
and-
and
there
was
some
issue
with
not
being
able
to
tell
with
cleaning
up,
I
believe
I
forgot
what
exactly
the
issue
was,
but
it
will
come
back
up.
Okay,
so
so
I
think
we're
in
a
good
shape
in
case
of
the
api.
C
Right now... okay, so, all right, let's talk about development. So while Tim is going to review all this, it is also important to make sure we're, you know, continuing development. So: we don't have health checks implemented for any of the COSI resources, so that needs to be implemented.
C
Currently, no. So, is it okay if... can someone else take up the task? Would you... do you want to do it?
C
Already working on emitting events.
C
Okay, so can you talk a little bit about that? Because, do we emit events on the pod that's requesting the bucket? So if I were to do kubectl describe pod, would I see the event saying "bucket provisioned" and, you know, "bucket not provisioned"?
C
That's exactly it, so we get the pod name and namespace.
B
...for publishing, and so we use that to emit events onto the pod when we're provisioning, or we're...
I
The same process happens for unpublishing.
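A minimal sketch of that event flow using client-go's EventRecorder; the component name, reason strings, and messages here are assumptions, not the actual COSI sidecar code:

```go
package sidecar

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newRecorder wires an EventRecorder that writes Events through the API server.
func newRecorder(client kubernetes.Interface) record.EventRecorder {
	b := record.NewBroadcaster()
	b.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	return b.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "cosi-sidecar"})
}

// emitProvisioned attaches a Normal event to the requesting pod, so it
// shows up under Events in `kubectl describe pod`; unpublishing would
// emit a corresponding event the same way.
func emitProvisioned(rec record.EventRecorder, pod *corev1.Pod) {
	rec.Event(pod, corev1.EventTypeNormal, "BucketProvisioned",
		"bucket has been provisioned for this pod")
}
```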
F
Someone had a question? Yes, just on the health checks: for the health check for a provisioner, would it make sense to adopt the Probe request that CSI has, instead of relying on the get-info call?
C
So the difference is, if you were to... you know, if you want to do a health check on the driver itself, you'll need the gRPC API for it.
C
If
you
want
to
do
the
health
check
for
the
the
sidecar,
which
is
not
the
driver,
the
sidecar
itself,
then
you
don't
need
that
now,
given
that
the
sidecar
and
and
the
driver
have
to
run
in
the
same
part,
and
and
given
that
you
know
the
the
pods
and
sorry
the
containers
in
the
single
part
have
to
share
the
same
life
cycle.
C
I
think
it's
okay
to
say
you
know
just
have
the
health
check
in
in
just
the
side,
car
and-
and
you
know
we
can-
we
can
punt
the
driver
level
health
check
for
the
future
if
needed,.
F
Sense:
well,
yes,
if
the
sidecar
is
not
doing
any
grpc
call
and
to
to
handle
a
health
check
request
right
now,.
C
F
B
J
Nothing's
happening
so
like
it's
still
open
for
design.
I
was
just.
B
Probe... let me check. I think it's, like... okay, Identity.
C
I'm not sure. Okay, okay, so let's go with that. I think that's a better way to... I don't see a strong need for the driver itself, you know, to have a health check endpoint. You know, if there is a use case, obviously we'll implement it, the effort is low, but I think it's okay to just have a /healthz for the sidecar, because they're always tied together.
C
If the sidecar goes down, you... the driver, I mean, the driver doesn't work, basically. So I think one health check is enough, having it just in the sidecar. Okay. Or, like CSI does it: CSI has a separate liveness probe, like a health-check microservice.
C
I think that should work, I mean, but we were saying we won't, you know, we won't do the gRPC thing; we'll just add a /healthz endpoint to, you know, our driver.
C
We don't need the liveness probe.
A
I mean, liveness probes in general are kind of funny, because, you know, it's like... you can just see if the process is still there, and if it is, it must be running. But it's meant to detect some sort of a deadlock or a livelock, where, you know, you come in through the HTTP endpoint and you actually maybe perform a slightly deeper check, to say "yeah, I'm still processing requests", such that, like...
C
Having what fail? Having the whole system fail? Because what we're doing, by doing this, is we're having silent failures. Like, every time, let's say there's a livelock or a deadlock, what we end up doing is we restart, so the container... the deadlock goes away, it comes back up, and let's say it happens again, and it'll keep spinning like that. And unless you're looking at the processes, you're probably going to miss that it's actually going down and coming up.
C
Okay, so I think, yeah, I think, to begin with, let's just have a /healthz endpoint, like... like our server listening on /...
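A minimal sketch of the /healthz approach settled on above, with the deeper "still making progress" check left as a comment; the port and handler body are assumptions:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Plain HTTP health endpoint in the sidecar; no gRPC health service
	// for the driver to start with, since both share the pod's lifecycle.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// A check slightly deeper than "the process exists" could go here,
		// e.g. confirming the reconcile loop made progress recently, to
		// catch the deadlock/livelock case discussed above.
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port is illustrative
}
```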