From YouTube: Sig-Auth Bi-Weekly Meeting for 20230426
A
Hello everyone, welcome to the April 26, 2023 meeting of SIG Auth. Let's kick it off. We have quite a bit on the agenda. First announcement: if you missed the KubeCon SIG Auth deep dive, here's the schedule, along with the slides, and the recording should be up, I think, next week. And Jordan, you want to talk about the other topics?
B
Yeah, so for those who were at KubeCon, it was great to see everybody. We had several good discussions, kind of working out some long-standing questions about designs and things. We tried to take good notes for the people who weren't at KubeCon. I've missed plenty of KubeCons and wished that I knew what people talked about, so we tried to at least take notes and leave breadcrumbs, both for our own memory and for the benefit of people who wondered what topics we talked about.
B
Thanks, David. A lot of those conversations are going to trickle into updates in the KEP designs, pull requests, and things in the next few days, hopefully this week or next week. So look for that, but I did want to point people to some of the notes we took.
B
That segues into the next couple of bullet points. We're starting to spin up work for 1.28, and we got a clearer picture of which things are ready to proceed with implementation, which things seem achievable to get a design pinned down and actually get an initial implementation in 1.28, and then a couple of things that are probably going to stick in the design phase for 1.28. Those are links to queries into the enhancements repo; take a look at those.
B
We tried to actually assign things to the people who said they would be working on stuff, or who we think will be working on stuff. So look at what's assigned to you and make sure it matches what you're planning to work on. I didn't have a lot else to say; it was mostly just an announcement.
B
Look at those things, make sure they match your expectations, and if they don't, jump in Slack or something and we'll figure out who to add, who to remove, or what to pull in or push out. I wanted to make sure everyone was on the same page. And then the last item was just linking to a couple of things that SIG API Machinery is really driving but that are of interest to SIG Auth: some of the continuing improvements to admission stuff that is probably interesting to people here.
B
Try to take some of that feedback and update the design. I don't think we're ready to wrap this up and have it be implementable yet, but they had some next steps to kind of continue pushing on the design. Gotcha.
A
Okay, cool, thanks. All right, I guess with that, let's talk about user namespaces support.
C
So hi, I'm Rodrigo. In SIG Node we added support for user namespaces as an alpha feature, but we wanted to check with you folks here how it should interact with pod security policies. During KubeCon I chatted a little bit about it, but it's best anyway to come here to the meeting and make this topic available for everyone.
C
Basically, what we are doing now is adding a new namespace, the user namespace. Until now, a pod always used the host user namespace, and even the restricted policy still allows that. So we're trying to think about how this should interact with the pod security standards. What we discussed was basically that at some release, probably while the feature is still alpha, we should disallow the host user namespace in the restricted policy for, I don't know, Kubernetes whatever version, and then go from there.
D
So
so
think,
other
other
things
to
add
there
is.
We
would
also
want
some
some
Fields,
like
capabilities
to
be
expanded.
So
when
we
are
in
a
username
space,
it's
actually
safer
to
give
more
capabilities
to
the
pod.
You
can
be
running
as
root
because
you
are
not
really
root
on
the
host
and
so
on.
So
it's
like
when
we
are
in
a
username
space,
some
of
the
other
fields
get
relaxed
and
that's
what
we
want
to
be
able
to
express.
B
Okay, so the Baseline level would continue to allow a default pod.
B
The guidance for the Baseline level is that we don't allow escalating things, putting things into a pod spec that escalate permissions beyond just the defaults. So you can't say privileged: true, you can't add a hostPath volume, you can't say hostNetwork: true. But because the host user namespace has been the default, that will continue to be allowed in the Baseline level.
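The Baseline rule described above can be sketched as a check over a pod spec. This is an illustrative toy only, not the real pod-security-admission code: the field names mirror the Kubernetes API, but the function and its return shape are invented for this example.

```python
# Illustrative sketch of the Baseline idea described above: reject fields
# that escalate beyond a plain default pod, but deliberately keep allowing
# the host user namespace, since that has always been the default.

def baseline_violations(pod_spec: dict) -> list[str]:
    """Return the list of Baseline-style violations for a pod spec dict."""
    violations = []
    if pod_spec.get("hostNetwork"):
        violations.append("hostNetwork=true is not allowed")
    for vol in pod_spec.get("volumes", []):
        if "hostPath" in vol:
            violations.append(f"hostPath volume {vol.get('name')!r} is not allowed")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            violations.append(f"container {c['name']!r} sets privileged=true")
    # Note: hostUsers (the host user namespace) is NOT checked here;
    # Baseline keeps allowing the historical default.
    return violations

default_pod = {"containers": [{"name": "app"}]}
bad_pod = {"hostNetwork": True,
           "containers": [{"name": "app",
                           "securityContext": {"privileged": True}}]}

assert baseline_violations(default_pod) == []
assert len(baseline_violations(bad_pod)) == 2
```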
C
Well, it's actually more tricky, because to use a user namespace we don't only need changes in Kubernetes; we also need support in the container runtimes, the Linux kernel, and the file systems used by the pod. So even in 1.30 you're not gonna know for sure that it will be honored.
E
As I understand the way user namespaces work, a pod would want to turn on those capabilities, and we would be able to let a pod that was using user namespaces run as root. The point is that that would be allowable in a restricted policy, because you would not be able to escape from your container and impact things in other containers. Yeah.
B
So
there's
there's
two
directions:
one
is
relaxing
restrictions
on
things
like
run
as
user
for
pods
that
say
username
space
true
and
like
David
said
we
would.
We
would
only
want
to
do
that
once
we
were
sure
that
the
username
space,
true
thing
expressed
in
the
API,
was
actually
effective.
All
the
way
down
to
the
node
in
the
runtime
so
like
the
prereqs
for
that
are
the
field
hits
GA.
So
we're
sure
that
we're
sure
that
it,
like
the
feature,
is
locked
on.
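The conditional relaxation being proposed here can be sketched in a few lines. This is a hypothetical model, not the actual restricted-profile check: `hostUsers` matches the alpha pod API field, but the function and the `userns_locked_on` flag are invented to illustrate the two prerequisites being discussed (pod opts out of the host user namespace, and the feature is known to be honored everywhere).

```python
# Hypothetical sketch of the "relax Restricted when user namespaces are in
# use" idea. Restricted normally requires runAsNonRoot; the proposal is to
# waive that only when the pod opts into user namespaces AND the feature is
# guaranteed to be honored all the way down to the node.

def restricted_allows_run_as_root(pod_spec: dict, userns_locked_on: bool) -> bool:
    # hostUsers=False means the pod asks for its own user namespace.
    uses_user_namespace = pod_spec.get("hostUsers") is False
    return uses_user_namespace and userns_locked_on

pod = {"hostUsers": False, "containers": [{"name": "app"}]}
# Before the feature is GA and locked on everywhere, no relaxation:
assert restricted_allows_run_as_root(pod, userns_locked_on=False) is False
# Once every skewed node is known to honor (or reject) the field:
assert restricted_allows_run_as_root(pod, userns_locked_on=True) is True
```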
B
You can't have a component that says, actually, I don't pay attention to that feature, and the oldest node that we would support in skew with the control plane is either going to honor the user namespace flag or reject the pod if it turned off the feature. I'm not sure how it was implemented on the node side, and I guess you were saying there's also container runtime stuff.
B
So do we have positive detection on the node side to know that the container runtime supports user namespaces? Or does it just tell the container runtime, run this thing with user namespaces, and if the container runtime doesn't support it, it's like, oh well, you got host namespaces? Do we have positive detection there?
C
So basically, the short answer is no. The longer answer is that the change to the container runtimes will probably be backported, so it will probably be honored even on old container runtimes. But yeah, the change is a backward-compatible change: if you set this field in the CRI API, which is a gRPC API, and the other side doesn't know about that field, then it's ignored.
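The backward-compatibility behavior just described, and why it defeats positive detection, can be modeled simply. The real CRI messages are protobuf over gRPC, where a decoder skips field numbers it does not know; this sketch fakes that with plain dicts, and the names are invented for illustration.

```python
# Sketch of why the CRI change is "backward compatible" in the sense above:
# an old runtime decodes only the fields it knows and silently drops the
# rest, so the request succeeds but user namespaces are ignored -- which is
# exactly why there is no positive detection signal.

OLD_RUNTIME_KNOWN_FIELDS = {"network", "pid", "ipc"}   # no userns field

def old_runtime_decode(namespace_options: dict) -> dict:
    """Model a proto3-style decoder that skips unknown fields."""
    return {k: v for k, v in namespace_options.items()
            if k in OLD_RUNTIME_KNOWN_FIELDS}

request = {"network": "POD", "pid": "CONTAINER", "usernsOptions": {"mode": "POD"}}
seen = old_runtime_decode(request)
assert "usernsOptions" not in seen      # silently ignored, no error returned
assert seen == {"network": "POD", "pid": "CONTAINER"}
```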
B
Ideally that goes into the KEP, and maybe have a section in the KEP talking about relaxing the Baseline and Restricted pod security standards.
C
Okay, but maybe to have an agreement on the simpler things first, or maybe these are simpler to resolve: at some point, should the restricted policy disallow using the host user namespace, or is that not what we want for the restricted policy?
B
We
so
we
are
able
to
add
new
requirements
to
the
restricted
policy.
I'm,
not
sure
if
we
would
say
we
would
disallow
setting
host
or
if
we
would
require
setting.
B
If I recall correctly, you are only allowed to set user namespaces for stateless pods, which don't have volume mounts. It seems really strange to me that we would require pods to use user namespaces as a security measure when they could just say, actually, I'd like to mount a persistent volume, and then we would no longer require them to use user namespaces.
C
That
seems
pretty
soon
Okay,
so
yeah,
okay,
so
that
is
another
thing
like
we'll
add
support
for
stateful
parts,
usually
probably
soon,
but
what
I
don't
understand
from
your
comment
is:
do
you
want
us
to
wait
until
we
support
stateful
Parts
to
add
Port
security,
integration
with
Port
security
standards,
or
shall
we
do
something
before
we
support
all
kinds
of
parts?
So.
E
So I would really like to find a way to adjust pod security admission so that, if I want to opt in (I know that all my nodes are going to honor this, because I own my deployment), I can enable pod security admission to relax the requirement in advance. I think that's useful because it allows refinement in my cluster: things that are safe in my cluster are actually safe.
E
I
know
this,
so
I
can
turn
it
on,
and
it
allows
us
to
get
pre-ga
signal
on
both
the
capability.
Does
it
does
it
run
well,
is
it
efficient
and
the
user?
The
felt
user
impact
I
am
now
able
to
change
my
namespace
from
either
baseline
or
privilege
down
to
restricted.
With
this
new
feature,
I
would
like
to
have
the
ability
to
do
that
and
without
a
way
to
signal
that
pod
security,
admission
I,
don't
know
how
I
would
get
that
same
signal.
E
…were present before, except for the…
B
Well, the Windows support intersected with pod security admission, and we actually waited until the oldest node had support, or would fail safe, with the pod OS field before we relaxed pod security admission.
E
And none of what I'm saying is, hey, we should change the default. What I'm saying is, if we're going to opt into this and turn it on in a cluster, the chances that someone is going to come to me and say, David, we should turn this on in a tech preview cluster, seem very high. I'd like to be able to have a full story.
B
I
could
maybe
see
an
alpha
level
thing
so
that,
like
it,
sets
expectations
appropriately
like
this
is
a
preview
thing.
This
is
not
a
thing
that
you're
expected
to
enable
and
then
upgrade
and
like
it's
a
try,
it
out
thing.
Maybe
I,
don't
know
I,
don't
think
I.
E
I mean, I'm open to that. Looking at it, I don't think I'd want it based on the feature gate. There's one argument that says, okay, when you turn on this feature gate, this is what you get. The issue is that because we can have skewed kubelets, an old kubelet wouldn't honor that feature gate, right? So there would be no safe way to turn that on; I wouldn't want to break somebody else's cluster. So a second opt-in would work for me.
E
No, well, based on the... yeah, sure, because it would never be on by default, right? I think that would satisfy the need, and it would let us get the code that we need in and try it, yeah, because...
E
You
can
see
you
end
up
in
a
state
where,
like
okay,
you
got
three
releases
to
get
to
the
cute
little
bit
and
then
now
you
have
to
have
several
releases
in
the
API
server
to
get
stability
and
yeah.
D
I can try to summarize what I got out of it, and David and Jordan correct me. We'll go and update the KEP for user namespaces and describe how we would want things relaxed, and on the side, to start getting feedback on whether a restricted policy can require this, we'll have a separate flag which will stay alpha, in order to test it out early and not have to wait till it's available on all the nodes. Is this second change also part of this KEP, or is that a different KEP?
E
That shifts the coordination to the vendor or deployer or whoever's managing it: if they come in and say, okay, control plane, now use user namespaces in pod security admission, they'd better be sure that all their kubelets support it.
C
This will be a feature gate on the API server, and the burden to verify that the kubelet will honor it will be on the cluster admin. And then we'll have another feature gate, when user namespaces are GA, for the restricted policy to enforce user namespaces. For that, the feature will need to be GA and support, of course, not only stateless pods. Did I get it correctly?
C
But I mean, it should support all the pods by the time we want to enforce user namespaces in the restricted policy, because Jordan was saying that otherwise you can just add a volume and circumvent it, yeah.
C
Okay, so I think that sums it up. For restricted, we'll wait to change the restricted policy to require user namespaces until the feature supports all the pods. To relax the checks for when user namespaces are in use, we'll use a different feature gate, and the burden to verify that the kubelet will honor it will be on the cluster admin.
E
Of course, but in broad strokes, yes, that's what I'm thinking. I think that gives you a way to phase this in, get the feedback you need, and offer a carrot to a user who does the thing that you want. I'm maybe less sure about eventually requiring the field to be set.
D
Yeah, that's fine. I mean, we can revisit it.
C
Yeah, we can mention in the KEP that this is the rough high-level idea, and before migrating to GA we'll clarify with SIG Auth.
A
All right: automatic reloading of CA certs in the Kubernetes client libraries.
F
I'm not sure if this is coming through, but can you guys hear me or not? (Yeah, we can hear you.) Okay. Basically, somebody working on a slightly different product than me brought this up. When you rotate the CAs backing a Kubernetes cluster, it looks like the CA root that we plumb into every pod for talking to kube-apiserver...
F
That's
not
it
just
read
once
at
pod,
startup
and
then
kind
of
held
Forever,
at
least
that's
what
it
looks
like
to
me.
So
you
have
to
bounce
every
pod
in
the
cluster
when
you
rotate
the.
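The failure mode being reported here is the difference between snapshotting the CA bundle bytes at startup and keeping the file path so it can be re-read later. This toy model is not client-go (the real behavior lives in client-go's transport layer); the class names are invented to contrast the two behaviors.

```python
# Toy model of the CA-rotation problem described above: a client that reads
# the CA bundle once never sees a rotation, while one that keeps the *path*
# and re-reads it picks up the new root without a restart.

import os
import tempfile

class SnapshotClient:
    def __init__(self, ca_file: str):
        with open(ca_file, "rb") as f:
            self.ca_data = f.read()        # read once at startup, held forever
    def current_ca(self) -> bytes:
        return self.ca_data

class ReloadingClient:
    def __init__(self, ca_file: str):
        self.ca_file = ca_file             # keep the pointer, not the bytes
    def current_ca(self) -> bytes:
        with open(self.ca_file, "rb") as f:
            return f.read()                # re-read on each use

fd, path = tempfile.mkstemp()
os.write(fd, b"OLD-CA")
os.close(fd)
snap, reloading = SnapshotClient(path), ReloadingClient(path)
with open(path, "wb") as f:                # rotate the cluster CA on disk
    f.write(b"NEW-CA")
assert snap.current_ca() == b"OLD-CA"      # stale: this pod must be bounced
assert reloading.current_ca() == b"NEW-CA" # picks up the rotation
os.remove(path)
```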
B
If we can't load the file, we error. If we can load the file, we set a pointer to it in the client config: in the client config we say, the CA file is here. What am I talking about... oh yeah, there we go. So this is a check to make sure that the file we're pointing at is a valid cert bundle, but what we actually propagate downward is a pointer to the file, not the content.
B
We
loaded
from
it,
okay
and
so
later
elsewhere,
back
on
the
ranch
like
I'm,
pretty
sure
that
if
we
are
given
pointers
to
handles,
we
load
the
content,
but
then
we
set
reload
TLS
file
is
true
and
I
thought
we
like
set
up.
The
background
was
the
issue
you
were
seeing
just
based
on
inspection
of
code,
or
was
someone
actually
observing
like
a
rotation
failing
to.
B
...if you were given a CA file. So yeah, it looks like there's a bug, and a test and a fix would be welcome. Okay.
A
Lauren, are you looking at specific files right now? Do you want to link it?
A
All right, next: inconsistent authorization of node resources.
G
Yeah
I
I
put
that
in
so
so
there's
a
urban
looking
at
like
trying
to
get
the
metrics
add-ons
like
Prometheus
running
on
clusters,
with
some
restrictions
that
prevent
you
from
running
exec
on
the
nodes.
I
saw
that
there's
this
kind
of
difficult
position
that
we
put
users
in
that
like
there's,
two
ways
to
get
say:
node
metrics
on
any
kubernetes
cluster,
the
kind
of
the
official
or
the
most
supported
way
would
be
via
the
API
server
right
like
we
don't.
G
We
don't
recommend
that
you
we
allow
direct
access
to
the
cubelet
so
and
the
problem
with
getting
say
nodes
metrics
from
via
the
API
server
requires
a
very
high
level
of
privilege.
It
basically
requires
the
proxy
sub
resource
on
the
the
basically
that
lets
you
like
run
arbitrary
commands
on
on
any
container
running
on
that
node
on.
G
There
is
some
subdivision
of
the
like
the
rbac
that
cubelet
will
use
to
authorize
requests
so
that,
like
you,
only
need
a
subset
of
the
permissions
to
say
access,
metrics
or
stats.
So
we
put
users
in
this
difficult
position
where,
like
you'd,
need
different
position.
Permissions.
G
If
you
are
the
connecting
to
the
nodes
correctly,
which
we
don't
recommend
or
if
you
use
the
recommended
path,
which
is
via
the
API
server
proxy,
then
you
require
like
a
very
high
privilege,
and
you
can't
really
take
advantage
of
the
our
back
sub
resource
division.
That
is
that
cubelet
supports.
So
this
is
basically
an
explanation
of
that
and
one
I
have
kind
of
put
in
as
examples
or
like
initial
thoughts
about
certain
ways.
G
We
could
potentially
fix
that
one
is
to
make
the
way
cubelet
creates
Maps
the
request
path
to
the
request
attributes
when
performing
a
authorization
request
consistent
across
cubelet
and
API
server,
so
cubelet
would
would
require
just
node
slash,
metrics,
API
server
would
require
node
slash
proxy,
so
one
proposal
is
to
make
the
authorization
nodes
in
API
server
match
what
cubelet
does
so
you,
you
don't
have
to
have
direct
access
to
the
node
and
you
you
get
to
assign
the
least
amount
of
privileges
to
any
client
that
needs
to
access,
say
node
metrics.
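The inconsistency and the proposed fix can be sketched as two path-to-attributes mappings. Both mappings are simplified and the function names are invented; the point is only the shape of the change: the kubelet already maps its own paths to fine-grained sub-resources, while the API server collapses everything under proxy.

```python
# Sketch of the authorization-attribute mismatch described above, and the
# proposal to make the API server use the kubelet's fine-grained mapping.

def kubelet_attrs(path: str) -> tuple[str, str]:
    """How the kubelet maps a request path to (resource, subresource)."""
    first = "/" + path.split("/", 1)[0]
    sub = {"/metrics": "metrics", "/stats": "stats", "/logs": "log"}.get(first, "proxy")
    return ("nodes", sub)

def apiserver_attrs_today(path: str) -> tuple[str, str]:
    """Today: anything under nodes/<node>/proxy/... is just nodes/proxy."""
    return ("nodes", "proxy")

def apiserver_attrs_proposed(path: str) -> tuple[str, str]:
    """Proposal: reuse the kubelet's fine-grained mapping."""
    return kubelet_attrs(path)

assert kubelet_attrs("metrics") == ("nodes", "metrics")
assert apiserver_attrs_today("metrics") == ("nodes", "proxy")       # over-broad
assert apiserver_attrs_proposed("metrics") == ("nodes", "metrics")  # least privilege
assert apiserver_attrs_proposed("exec/foo") == ("nodes", "proxy")   # still broad where needed
```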
G
So
my
question
is
like:
what
can
we
do?
Is
this
a
problem
that
is
worth
solving
and
is?
This
is
accessing?
Is
letting
add-ons
and
applications
access
the
node
endpoints
directly
kind
of
a
supported
path
to
the
long
term
r.
G
Are
we
taking
the
position
that
like
if
you
are
connecting
to
any
node
resource,
any
cluster
resource,
you
should
be
going
via
the
API
API
server?
No
matter
what
in
which,
if,
if
the
letter
is
the
position
that
we
hold,
then
we
should
probably
fix
this
inconsistency
and
allow
users
to
assign
least
privilege
to
to
their
to
the
workloads.
G
That's why I kind of marked it as a hack, but I think the real solution is something that looks more like reinterpreting the request, right? But that also has problems, in that to preserve backward compatibility, proxy should also match the metrics sub-resource, so that any previous authorization that just allows proxy, which is what I think most clients would do, would also still work.
G
Directly,
you
don't
have
much
of
a
problem
right,
except
that
we
don't.
We
kind
of
explicitly
say
that
direct
node
acts
like
accessing
the
cubelet
API
endpoint
is
not
like
not
permitted.
It's
kind
of
an
API
server
bypass
press,
you
bypass
all
of
the
admission
control
and
all
of
the
like
the
security
controls
are
all
of
the
controls
on
the
cluster
are
in
the
API
server
and
letting
letting
clients
access,
tubelet,
directly
kind
of
bypasses
them,
and
we
don't
recommend
that
and
not
not.
B
The kubelet can still delegate authorization checks to the API server, and you're never going to hit admission for read requests. I'm just trying to understand what additional policy we think we're gaining by going through the API server instead.
B
Yeah
I,
what
every
time
we
talk
about,
the
cubelet
API,
we
kind
of
skirt
around
the
fact
that
the
cubelete
API
is
not
really
a
thing.
That
is,
super
well
defined
and
we
kind
of
say:
oh,
we
don't
really
want
people
like
using
it,
but
then
some
sometimes
it's.
G
Right
so
to
regard
even
let's
say
we
make
improvements
to,
but
the
changes
that
you
would
need
to
it's
like
I'm,
trying
to
imagine
the
better
cubelet
API
that
would
make
this
problem
moved
and
I'm
thinking.
That
would
be
something
along
the
lines
of
where
the
safe
and
metrics
sub
resource
is
a
is
a
resource.
G
That's
actually
known
to
the
API
server
and
interpreted
especially
like
in
the
today
like
when
you
do
a
API
V1
notes,
node,
slash
proxy
slash,
metrics,
stash
metrics
is
just
a
path.
A
cube,
API
server
doesn't
really
care
what
that
is
right.
So,
but
if
there
were
a
node
API,
then
the
path
would
look
something
more
like
nodes.
Slash
no
note,
slash,
node,
slash,
say
metrics
API
or
something
like
that,
and
that
would
be
the
that
would
then
be
proxy
to
cubelet.
So
is
that
the
thing
that
you're.
B
Like
maybe
like,
if,
if
we're,
if
we're
actually
saying
this,
is
an
API
service
that
we
want
to
support
rather
than
trying
to
just
solve
the
authorization
part
of
it,
and
let
people
still
go
through
this
sort
of
unstructured
catch-all
proxy
hit
a
random
endpoint
on
the
cubelet
middle
man
in
the
in
the
API
server
and
just
authorize
it
especially
like
it
seems
like
if,
if
logs
and
metrics
and
stats,
if
we're
saying
these
are
so
fundamental
and
important,
we
need
to
support
it.
B
E
...as APIs, I thought. So I might have misunderstood the stance just taken three months ago regarding node logs. Are you saying we should reassess that and say, you know what, these endpoints are special: we do know they have different behavior than proxy in general, we do know they can be authorized differently than proxy in general, and we do recognize that in some deployments they are valuable to be recognized differently?
G
Do I understand correctly that you can authorize someone to access just a pod's logs, right? That is something that's possible through RBAC today? Yes.
B
But
right
I
mean
the
sort
of
the
rationale
behind
that
being
different
than
node
logs
is
that
the
Pod
logs
are
completely
owned,
like
they're
limited
to
things
managed
by
kubernetes.
What
you
can
access
via
node
logs
is
currently
like
any
log
on
the
system
and
with
the
new
Alpha
systemd
Windows
service
support.
It's
like
any
log
of
any
service
running
in
the
system.
Yeah.
G
That
that
is,
that
looks
like
a
kind
of
a
time
bomb
like
because
there's
lots.
There
are
a
lot
of
components
which
accidentally
forget
to
scrub
the
the
logs
of
sensitive
data
and
yeah.
It's.
B
So,
rather
than
focusing
just
on
the
authorization
piece,
I
guess
I
would
want
to
know
like
are
these
apis?
We
actually
want
to
support
so,
let's
Define
them
and
like
say
what
the
guarantees
are,
rather
than
just
special
casing
the
authorization
bit
okay.
G
So
I
think
I
should
go
talk
to
signode
and
see
see
what
they
think
about
this
is
that
right.
B
I
think
more
clarity
can
only
help,
even
if
even
for
people
who
are
currently
using
those
endpoints
directly
against
the
cubelets
and
I
would
I
would
try
to
do
this
yeah
more
more
holistically,
and
for
that
we
need
feedback
from
the
signaled
on,
especially
the
metrics
and
stats
like
what's
the
stability
of
those?
What's
the
long-term
plans
for
those,
if
we
wanted
to
promote
those.
G
And
there
are
a
lot
of
other
actions
being
added
as
well
like
this
checkpoint
is
a
new
entrant
which
unfortunately,
was
not
added
as
a
new
sub
resource
when
it
was
so
you're
still
using
proxy,
then.
B
Yeah
I
I
think
proxy
is
a
decent
until
better
guarantees
are
made
having
things
live
under
proxy
and
require
like
super
broad
access
and
have
a
disclaimer
on
the
says.
This
can't
be
part
of
conformance
like
we
don't
this
API
service
isn't
really
guaranteed.
B
We
shouldn't
promote
things
out
of
there
until
there
is
more
clarity
about
what
we're
actually
supporting
there
like
as
soon
as
it
becomes
an
API
that
you
can
call
directly
and
Cube
API
server.
I
think
people
have
way
higher
expectations
that
that
won't
break.
G
Okay,
so
yeah.
So
the
conclusion
that
I
draw
from
this
discussion
is
that,
like
the
the
path
is
to
define
a
stable
API
for
these
things,
so
that
they
can
become
excellent,
stable,
fully
grown
resources
and
then
kind
of
the
the
fact
that
we
use
proxy
as
as
a
casual
path,
goes
away
by
itself.
Right.
G
Yeah, but one question I have is: would this be a much bigger problem than just defining a metadata format that makes it suitable as a full-fledged API surface? We'd need to define the schema of the stats contents, of the stats response itself, right?
G
Would
that
would
that
be
a
prerequisite
to
making
it
or
or
could
we
could
this
be
like
a
raw
extension
where
it
could
be
arbitrary,
Json
data
under
a
field
and
some
metadata
that
says
that
these
are
a
node
stats?
Those.
B
Are
probably
better
questions
for
Sig
node,
I
I,
don't
know
that
I
would
try
to
radically
change
the
content
of
those
endpoints.
What
I
would
care
a
lot
more
about
is
like
if
someone,
if
we
add
or
not,
we
if
signode
adds
sub
resources
for
these
and
someone
integrates
with
them.
Can
they
be
confident
that
they
won't
get
broken
in
the.
E
I guess we haven't defined it. Oh, and then for logs, I suspect we could actually find a way to constrain logs to do the 80% case safely and not exceed scope, but I wouldn't block something like metrics on it. I just want to make sure it was going to end up in the same category.
A
Okay, I think that's it; that's all the stuff on the agenda. Anything else anybody wants to talk about? If not, we can end it.