From YouTube: sig-auth bi-weekly meeting 20210818
A: All right, hey everyone. This is the SIG Auth meeting for August 18th, 2021. We have sort of a mixed agenda, but let's get started. First of all, Jordan said he couldn't make it, but he did remind us that for 1.23 the deadlines are fast approaching: the PRR and KEP deadlines are in early September, and the dates are there. But David, you said that PRR primarily matters, or is primarily difficult, at beta and GA levels. So I guess if you've got an alpha KEP, maybe you don't have a bunch of work to do. Okay.
B: That's right. For alpha KEPs the process is really lightweight, because you don't yet really know what can go wrong, so the process helps you there. But when you go to beta and you're on by default, there's actually a bunch of questions to answer, like: how can a cluster operator know whether this is working as expected, since it's on by default?
A: So, let's see, that means just a little over three weeks, I guess, between now and KEP freeze. That is not a long time, and your KEPs have to be reviewed before then. So, cool. With that, I'll skip over the issues for a second. Mike, did you want to talk about your KEP? Did you want to bring it up right now, or discuss it later?
C: No, I just wanted to announce it and say this is brand new. A couple of weeks ago I was thinking that additional metadata in the webhook authenticator backend would be useful, such as the request IP address and the Host header used on the request, and I proposed a KEP to add that. I think Jordan commented that the additional metadata might also be useful in authorization webhooks and admission webhooks. So it's brand new.
C: It's a brand new KEP, so if you think that sounds interesting, leave a comment; if you think it's a bad idea, also leave a comment and we can talk about it some other time.
C: I'll spend time on it if I have time to work on it, but I don't think this is necessarily targeting 1.23.
B: For those of us who haven't looked at the code in a little while: my memory of token authenticators is that today, even in-tree, they don't have access to the headers, same with authorization and admission. Is this proposing adding header information all the way through the stack, or just to the webhooks? Or am I mistaken? I might be mistaken; it's been a while.
C: So the implementation is not prescribed yet. We have the request info filter right now, which carries mostly resource attributes, although it is called request info. We could pick the specific things that we think are useful in these webhook backends and add them to request info. That seems like a logical spot: request info is already available via the context object in authenticators.
A: To what David said: even though our token authenticators generally just get passed in the token, nothing prevents us, since the general authenticator interface literally gets the entire request as input, so we could go crazy. Yeah.
B: Yeah, you're right, it does not have the headers in it, so this is talking about changing that flow in the server. Okay, so we are talking about changing it.
C: I would not copy the HTTP request into the context or whatever. I would take the specific attributes that we think are useful, a limited set, and propose forwarding that via request info or a similar mechanism.
A: As a related question, Mike: does your KEP consider things like aggregated API servers, which are in the request processing chain but do not have the ability to literally see the initial request, because they are behind the API server, generally speaking?
C: Great question: it does not. It actually only considers authentication webhooks, so Jordan asked about authorization and admission, which are not addressed yet in the KEP; I have not updated it. I will also consider aggregated APIs when I revise it. Okay, cool, thank you.
A: So I went over the last issue triage. There was some stuff that came up that we did not know how to address or do something with, so I just put it on the agenda so we can talk about it.
A: This is the first one. This came up a long time ago; it has been a known bug for a long time and we never addressed it. Then we kind of talked about it, like some concept of a CRD marking itself as sensitive, as a way of hinting through the underlying storage interface that it needs to be encrypted at rest.
A: I think that's about as far as we got in that conversation, but I wanted to bring this up because, while this is sort of an API server wiring issue, it's pretty firmly our problem in the sense that we own encryption at rest. So, do folks have opinions on this?
A: Yes, because we don't wire that data through to that part of the stack. That was my first comment there; it points out where we nil out that field. That field is the one that contained the information about the encryption-at-rest configuration, and when we reconstruct it for the extension API server bit, we don't reconstruct the encryption configuration information again, so it's lost.
B: Well, losing it was an accident, not planned. Yeah.
C: Yeah, and we have discussed exposing more of this through the API and have so far decided not to, but I think step zero is to make sure that the current configuration model works for all resources.
A: Yeah, I can buy that. I just wanted to not make a decision for us. Awesome, yeah. All right.
A: All right, next. This one, I think, is a pretty minor issue, which is that our validation for service account names is not exactly correct for what it's trying to be: we do an overall length check for the service account name, but we don't check each piece exactly.
A: Yeah, so we don't do a length check at 63 characters, even though we're supposed to, and apparently people do make these things in the wild. Somewhere in the code we do an overall length check at 255 or whatever, which is the overall limit, but we don't check the internal bits. Do we care, and should we do anything? Can we just document this, or do we need to document it?
C: It seems like a documentation bug at this point. I don't think it's more than that.
A: Yeah, all right. And then the last little bit. Okay, so in relation to priority and fairness: the /healthz probes, which are done anonymously by the kubelet and other actors, traditionally are completely exempt; healthz, livez and readyz are completely exempt, I think, from priority and fairness. Jordan said that was resolved by that, and then Daniel was like, no, we just broke it in a different way, and now this whole thread is SIG API Machinery and SIG Auth saying that it's not our fault, which is unhelpful.
B: I was involved, so I'll go ahead; you probably know. We ended up with an unfortunate crash loop. What happened is, if you had a really busy server, then priority and fairness would actually start returning 429s to healthz requests from the kubelet (healthz, livez, readyz), so your pod that was under pressure, but still responding, would end up getting pulled out of the load balancer, which would cause the next pod to take the load, which would then cause it to fail.
B: The "unresolved" came in because, in theory, some anonymous person (because those by default are available to anonymous users, I believe) can just come in and flood nothing but healthz requests, and maybe that starves out the process. So to resolve that, you would need to be able to identify that it was a kubelet or another confirmed actor, and give them unrestricted access to healthz, but not unrestricted access as a general user.
B: It would be possible to do that. I would find that to be a reasonable change, but I won't dictate that it is the only solution, just that it is one that is reasonable, one that I think we have the information for, and one that isn't risky security-wise, since the key never leaves the kubelet. But that is expensive.
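The exemption being described could look something like this sketch in Go: health-probe paths bypass throttling only for a confirmed node identity. The `system:nodes` group name follows Kubernetes convention, but the function and flow are illustrative, not the real priority-and-fairness code:

```go
package main

import (
	"fmt"
	"strings"
)

// isExemptHealthRequest sketches the policy discussed: health probes are
// exempt from server-side throttling only when the caller carries a
// confirmed node identity, not when the request is anonymous.
func isExemptHealthRequest(path string, groups []string) bool {
	healthPaths := map[string]bool{"/healthz": true, "/livez": true, "/readyz": true}
	if !healthPaths[strings.TrimSuffix(path, "/")] {
		return false
	}
	for _, g := range groups {
		if g == "system:nodes" {
			return true // identified kubelet: never 429 its probes
		}
	}
	return false // anonymous probes still go through priority & fairness
}

func main() {
	fmt.Println(isExemptHealthRequest("/healthz", []string{"system:nodes"}))
	fmt.Println(isExemptHealthRequest("/healthz", []string{"system:unauthenticated"}))
	fmt.Println(isExemptHealthRequest("/api/v1/pods", []string{"system:nodes"}))
}
```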
C: Yeah, I guess I'm curious how much more expensive serving up an OK is than just going through the priority and fairness quota system, because it does seem like it would be trivial to overload with 429s instead of OK responses.
C: Is there actually a significant difference in what we're protecting against? Even for anything with network access there's SSL exhaustion and there's SYN flooding. It seems like this is not the right spot to mitigate the concern that is, you know, motivating this change.
A: David, Mike, do you mind both commenting on this, just with your thoughts? That way we have a little bit of progress. You know, David's approach could at least be actionable if someone wants to go try to fix that. That would at least provide a thing, and we can stop fighting about it. Cool.
C: I think maybe we should close this bug and then create a feature request for kubelet healthz auth, or something like that. I'll comment that on the bug.
A: All right, Johan, am I saying your name right? I'm sorry if I'm mangling it over here. Yeah, all right, I think you're next up; we've gotten through the issues and the KEP announcements. Interrupt if you want to present the problem that you were having.
A: So, just so I understand: there are actors, you know, autonomous actors on your clusters, I think you mentioned the cron job controller, that issue no-op updates, and these generate audit event noise, which is problematic for your storage backend just by basically wasting space. You wanted to come up with some mechanism to inform the audit backend that while the update did occur, it was a no-op, so, yes, it can choose to drop the event if it doesn't care. Yeah.
G: Do you remember? I'm sorry, I was not paying attention.
B: Sure. So I just want to make sure what we're talking about, and that I remember the flow of the audit log correctly: when we first receive a request, we create an entry, or make a call, saying...
B: ...we have received a request, and at that point in time we don't know whether it's a no-op update or not. Then, after the request is handled, we send a second notification saying this is what actually happened, and that would be the earliest time, if we chose to do anything, that we knew it was a no-op request. I just want to make sure I remembered the flow correctly there.
G: Yeah, that's correct. When the request is received, before any work is done on it, it fires an audit event in the RequestReceived stage, which can be disabled, but yeah. Then, after the response is complete, there's a ResponseComplete stage as well. Okay, thanks.
A: So I was gonna say: I'm pretty sure option one cannot be done, because no-op updates are valid storage migrations today; it's a feature, not a bug. So no, we can't do option one, we'd just break the server.
A: I don't know what the requirements are to start adding, effectively, audit annotations to our events; their APIs are kept extremely tight, instead of adding actual fields.
B: I have thoughts. I was actually trying to think about whether this can be known during a validating admission webhook, and I think it can be known, because I think you have the old object and the new object that's about to be persisted. But you don't know that that's going to be the one that is actually persisted.
B: Yeah, the server actually retries updates; there's a loop on it for updates that get called during patch requests.
A: Yeah, okay. Okay, so there would be some audit annotation that could be added to let the backend know, and it would only be on... is it RequestReceived or ResponseComplete? It's on the ResponseComplete stage: that for, you know, whatever audit ID event, there was no change. Yes.
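The no-op check being discussed can be sketched in Go: compare the old and new objects while ignoring fields the server always rewrites, and attach an audit annotation when nothing changed. The objects here are plain maps standing in for real Kubernetes objects, and the annotation key is hypothetical:

```go
package main

import (
	"fmt"
	"reflect"
)

// isNoOpUpdate reports whether an update changed anything, ignoring
// resourceVersion, which the server rewrites on every write.
func isNoOpUpdate(oldObj, newObj map[string]interface{}) bool {
	strip := func(o map[string]interface{}) map[string]interface{} {
		out := map[string]interface{}{}
		for k, v := range o {
			if k != "resourceVersion" {
				out[k] = v
			}
		}
		return out
	}
	return reflect.DeepEqual(strip(oldObj), strip(newObj))
}

func main() {
	oldObj := map[string]interface{}{"resourceVersion": "41", "spec": "a"}
	newObj := map[string]interface{}{"resourceVersion": "42", "spec": "a"}
	annotations := map[string]string{}
	if isNoOpUpdate(oldObj, newObj) {
		// Hypothetical annotation key; the real key would be settled in a KEP.
		annotations["update.audit.example.com/no-op"] = "true"
	}
	fmt.Println(annotations)
}
```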
A: Yeah, I mean, okay. So, Tim, do you have thoughts?
F: That's my option two. So if we have both: at the metadata level I can request only the metadata, or only the request objects, or both the request object and the response object. Currently, in our implementation, we only have the request object, so our comparison is on the request object, and there is no response object returned to our webhook.
F: So, as you said, we can compare, of course: if we enable both the request object and the response object, we can compare them in our webhook, but that means the communication between the API server and the audit webhook will double, right? So it will only be a stress on our system.
A: Yeah. Tim, David, remind me: on the ResponseComplete stage, do you have the original resource version, or do you just get the final resource version?
A: I'm trying to think, though, about what problem this is trying to solve, right? Presumably your webhook needs to be able to handle arbitrary levels of traffic from the API server, because you don't want to lose audit events. Yes, basically, because that's the point of audit events.
A: Yes. So if you went down option two, you'd get twice as much request data. It does, however, enable you to significantly reduce your storage data, right? Because you can do the check that you want to do and not retain storage for things you don't care about.
A: Yeah, my main hesitation is: I don't necessarily want to grow this surface area indefinitely. The idea of having a bag of fields, effectively a map, is very tempting, because it's very easy and tempting to keep adding stuff to it, and there's a perverse incentive to keep doing that, because you technically are not changing the API, but you actually are still changing the API. Yeah.
A: So that's the aspect of this that makes me nervous. And yes, I realize it's a small field. So maybe my ask would be: try option two and see. I know it'll solve your storage issue; I'm curious if the actual network overhead is problematic at scale.
A: David, Tim, Mike, would you consider a KEP a good next step then, where we'll try and propose an annotation for this information?
G: Oh yeah, that's very reasonable. I definitely think that a KEP is the next step for this. I'm skeptical of the idea, but we can hash that out on the KEP. Okay.
A: Okay, thank you. Awesome, all right.
A: Oh, thank you, you're way better than me. Matt, you're up.
I: Yep. Yeah, this will be a pretty quick topic. I just wanted to kind of get a temperature check on something that Mo and I had been talking about. So right now we have the built-in OIDC authenticator, which validates JWTs, and does it the way that OIDC issues ID tokens, so they need to have an issuer.
I: It has some opinions that come from the OIDC spec, which is great. Sometimes you have JWTs that aren't OIDC ID tokens; they're still perfectly valid JWTs, they have standard JWT claims that you can validate, but maybe they don't come from OIDC discovery.
I: Sorry, their key set doesn't come through OIDC discovery, or they might not have all the claims that an ID token has, like issuer or things like that. Is there any appetite in this SIG for a new built-in token authenticator? That's the first question, and the answer to that may just be no, right. The second question was: if we did go down this path and I wrote a KEP, we'd have the whole design discussion...
I: ...for something that is sort of a superset of the current OIDC behavior, that also supports things like loading the JWKS directly from a URL, or loading the JWKS from a file, and potentially doing other kinds of validations that are useful for JWTs that aren't ID tokens. Anyway, I wanted to get a five-minute temperature check here, and then I'll either go write a KEP or I will skip writing a KEP. So, yeah.
I: Obviously, one argument, just to throw this out there, against doing this in core: you can get this from a webhook, the webhook extension point. The downside of doing it that way is sort of the same reason why we have the OIDC authenticator in-process, which is that it's good for performance and reliability to have the token validation happening all in memory.
A: You know, just as you provide a trust bundle on disk and then all the validation happens in-process, and then the conversion of that validated structure, like CN equals username and O equals groups, is all isolated in-process.
A: I think this is the token version of that, which is: there is some generic format that is perhaps less opinionated than OIDC, which requires a very particular set of claims, and a well-known URL, and discovery, and all of those things.
I: Okay, we don't have to drag this discussion out. If anybody has thoughts after the meeting, feel free to ping me in Slack. It sounds like I might be writing a KEP for this, and we can talk more details later.
A: All right, I think it's me now. Okay. So we talked a while back about some kind of proxy-based approach for client-go, to allow for arbitrary client-side authentication mechanisms through a local proxy of some sort, and I had done a POC for having some kind of web server on localhost that you perform mTLS with, and it does the authentication for you. After I had done that POC...
A: ...I started exploring options around Unix domain sockets, because those had been recommended as a possible approach on Mac and Linux, and something that was pointed out to me was that Windows 10 does support Unix domain sockets. So what I've been doing since then is trying to gather data and metrics around what percentage of...
A: ...users there is, to try to understand: if we said that this KEP was going to be purely Unix domain socket based, which subset of our community would it not be able to serve, and, with that as a possible downside, would it still be worth it to have a Unix domain socket (UDS) based approach instead of a web server?
D: Yeah, my reasoning was the security benefits, so yeah. Last time we talked about this, somebody brought up some examples; for example, the Azure secret store CSI driver uses...
D: ...I think, Unix domain sockets exclusively, for both Windows and Linux platforms. So I think there are some examples in the community of that approach.
D: Yeah, I would lean towards that, but I wouldn't block on it if there's more appetite for a TLS-based approach.
A: Yeah, I think that's right, and the thing I want to point out is that I think a UDS-based approach server-side, on like a kubelet or something, is a no-brainer, because you control the OS, and your OS is almost certainly Windows or Linux, right, and your Windows OS is probably very new, because you're probably running a Windows node with the new fancy container stuff and the new Linux subsystem and all that stuff, right.
A: So in that environment I think you're really well covered, and I don't think using a UDS is of any concern. My concern was more that this feature is client-facing, in the sense that kubectl and similar clients could leverage it, which means, you know, normal people's laptops and stuff, and Windows is the dominant OS for normal people. And I do know that, you know, Windows...
A: ...10 has made significant strides in terms of replacing Windows 8 and 7, but, as an example, if you go download the latest release of Go, it still works fine on Windows 7, right? I don't think there's any technical reason that our clients can't support even older Windows releases. So that was my concern there: I didn't want to leave people out of some functionality that they may want to use, and I think asking an end user to update their desktop OS is not a good ask.
A: Yeah, I had reached out to SIG Windows and they didn't necessarily have data for me, and I asked in Slack for, say, contributors, but yeah, I'm certainly willing to.
A: It sucks if you don't work on Windows. So there is probably some testing burden on this that is almost certainly significantly higher; I'm pretty sure a web server works just fine on Windows, I'm sure all that stuff looks exactly the same, and Go handles it really well.
A: So that is also, I guess, a concern. But to your point, Rita, I do think maybe an email to kubernetes-dev, and any other mailing lists that make sense, to try to get more data, is valuable. There's an end-user computing business unit at VMware that I'm in the process of reaching out to, because we sell software that manages people's host machines, so it obviously collects metrics on those. So I might be able to get some further data out of VMware, but yeah.
A: That was my thought there; I don't know if this is necessarily a fair approach. The other question I have is: let's just say, for our purposes, that we determine, based on our information collection, that only a small user base would be negatively impacted by using UDS, and thus maybe we go ahead and go down the UDS route. Would we still use TLS over this channel?
A: As an example, on the localhost port-based server approach, that was effectively a requirement to secure it.
D: What would be the reason to? Would it be...
A: It would basically be that you would be trying to add another layer. So if the UDS is there, it's not just that you need permissions on that file descriptor, or whatever, or a file name, to connect to it; you also need to be able to do mTLS to the server running on the other side.
A: As a way of saying, like, you know: you were the process that kubectl spoke with, and when kubectl did that, it was handed the mTLS certificate for a client cert, certainly used for mTLS, and it is now just doing so over this UDS, thus very tightly locking it down. It would basically be another layer on top of the UDS-based security.
A: I don't know exactly what the attack vectors are for a UDS. We had a comment in chat saying that they don't feel that TLS is necessary for something like a UDS on localhost.
B: So if you had access to the UDS, would you not also have access to...? I guess you wouldn't necessarily have to hit disk when launching this proxy; is that the thought? Yeah. So when...
A: All right, you can still just say that the TLS server name is, like, localhost, and okay, it'll still work.
B: I like having at least regular TLS, but yeah, I wouldn't stick on it.
A: Okay. I think, by the time you get regular TLS, adding mTLS is 100% free, because the other end had to manufacture a CA and a serving cert, so adding a client cert is zero extra work; it's already all there, and the process for transferring that data is also already there: it's in the client-go credential plugin API today.
A: So if you want TLS at all, then you should just do mTLS. I guess my question was: would this be plaintext HTTP over the UDS, or would it be more than that? What I got from you, David, is: I would like TLS, just because I like TLS everywhere. I do too.
A: Okay, Mike or Tim, Maria, did you guys have any opinion? Tim, I think you vaguely mentioned early on that this could be some functionality that perhaps Apple might want to use for something. Does this sway you in any particular direction?
G: You'd need direct memory access in order to snoop on that, but I also don't know the details of attacks against Unix domain sockets.
A: Okay, okay. Well, I can try to do some research and try to figure out what protection a TLS-based approach would add. I know it is done, at least in some places, because if you look inside the Go HTTP stack, there is a distinct difference between the "unix" protocol versus the "unixs" protocol.
A: Where do you put the client cert, though, and how would its permissions differ from the socket's? So, the client cert: the client-go credential exec plugin mechanism transfers this information over standard out. When you execute the binary, it prints to its standard out to hand you back the credentials, so they don't have to be stored anywhere; they're just passed through that buffer.
A: Thank you, yes. So yes, Mike, as you mentioned: basically, if your file permissions are not correct, it doesn't really matter, because, in effect, if you use a TLS-based approach, I think you have elevated the requirement to "you must be root on the box" to meaningfully have a chance of breaking into that channel.
A
Yep
yeah,
that
is
fair.
A: Yeah, it's another layer; I don't know. I think the question is basically: is the complexity in the design and implementation worth the protection? I do think, I mean, there are still some long-standing technical issues with exec, the main one being...
A: So maybe one day we'll have a kuberc file or whatever, where we can allow-list all the binaries that we are willing to accept. All right, but we are out of time. So thank you everyone for the great discussion, and we'll see y'all in two weeks.