From YouTube: sig-auth bi-weekly meeting 20200624
B
Can you hear me? Good, yeah. I'm just saying hello; I've been around for a while, but my life has been in some turmoil: I'm now at Apple, so my affiliation has changed and I'm figuring out the personal stuff. It sounds like I can keep doing the kind of pre-existing things I'm already doing, even though they're not necessarily related to my work, so I'll figure that out over the coming weeks and hopefully finish off this KEP. Finally, thanks.
D
Let me try again... sure... yeah, there we go. Sorry, guys. Alright, so, cool. Hello, it's Paulo, and today I will be talking about the seccomp operator. We presented this to SIG Node as well, twice now: last month and again last week. Here are the handles of everyone involved at the moment. Just before we get into that: seccomp is going to GA in 1.19. We're working on that right now, so hopefully that turns out okay. But still...
D
There are quite a few points that we think are not great with the current implementation, so we have been developing an operator to make it easier for users to use, and these are some of the needs that we think we have across the community. So, for example, custom seccomp profiles currently need to be manually copied across all nodes. Without that, you simply can't use custom profiles, and there's no in-cluster representation of profiles.
D
So, you know, it's very hard for people to share a given profile and be able to deploy it easily. There's also a lack of a medium to share seccomp profiles for known workloads. For example, if you have a given workload and you want to use a seccomp profile, you would probably have to create your own; it's hard to create one, share it, and help other people contribute to it.
D
In simple form, there's also a lack of metrics: it's hard to have a view of which parts of your workload are using seccomp, which profiles were actually applied automatically, which ones are using the default, and so on. And ultimately, another thing is, we think an operator would help guide users towards a stronger security posture.
D
So we're starting with an MVP, which is pretty much almost there. It already synchronizes the seccomp profiles that are configured in ConfigMaps across all nodes: you can create a ConfigMap, and all the profiles created inside that ConfigMap will pretty much be synced across all the nodes, and we also make sure that they have the correct ACLs set. In the future there are a lot of things we want to do as well; for example, we want to store public seccomp profiles there.
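To make the MVP concrete, here is a minimal sketch of what a profile stored in a ConfigMap might look like. The profile fields (`defaultAction`, `architectures`, `syscalls`) follow the standard seccomp JSON format that container runtimes consume; the ConfigMap name, the data key, and the node path mentioned in the comment are hypothetical, not the operator's actual conventions.

```python
import json

# A minimal seccomp profile as it might be stored in a ConfigMap data value.
profile = {
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {"names": ["read", "write", "exit_group", "futex"],
         "action": "SCMP_ACT_ALLOW"}
    ],
}

# Hypothetical ConfigMap carrying the profile; the operator would watch
# objects like this and write each data entry out to a file on every node
# (e.g. somewhere under the kubelet's seccomp directory).
config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "my-seccomp-profiles", "namespace": "default"},
    "data": {"my-workload-profile.json": json.dumps(profile)},
}

# Syncing = parse each entry and materialize it as a file per node.
for key, value in config_map["data"].items():
    parsed = json.loads(value)
    print(key, parsed["defaultAction"])
```

The point of the sketch is only the shape of the flow: profiles live as JSON blobs in a ConfigMap, and the operator fans them out to node-local files.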
D
So, you know, the idea is to start creating profiles for all sorts of components and help the community not only to contribute, but also to create new ones and make sure we keep them evolving. The next case is automatically applying profiles to workload containers: being able to create those profiles and, once you have a profile for a given workload, automatically map it and execute the workload with it inside your cluster. So, you know, if you have...
D
Another thing the operator brings is allowing clusters to fail closed. So, for example, at the moment, if the runtime on one of the nodes doesn't support seccomp, or a given profile wouldn't be applied for whatever reason, let's say because it's using an action that is not supported, the idea of the operator is to be able to stop that pod from being executed on that node.
D
So what we're trying to do is make sure that this is up and running and working well for seccomp, and once that is the case, we could pretty much extend it to AppArmor. Here's a mind map of the things we've started thinking about. So, as I said before, the in-cluster representation and the syncing of profiles is something we're considering, as is the idea of having the operator be seccomp-feature aware.
D
So the key reason why we're presenting this, and why we presented it to SIG Node as well, is that one of the things we're seeking is, you know, a blessing for us to create a repo within kubernetes-sigs, so we can actually benefit from the development ecosystem and also from community support. And also, one thing that is very valuable for us is getting help and ideas from anyone who is keen on seccomp and the operator. So that would be all from me, really. Do you guys have any questions?
H
So we've extended that concept: you have a common policy report, and the idea (just going by this document that's linked in the agenda; I'm not going to go through the details) is that it creates a reusable type for which we can have adapters from different policy tools. There seem to be quite a lot of these now, with different policy tools addressing, I guess, different concerns, but the idea would be to allow them to create a common report structure.
H
So we've been working on this: there are folks from Red Hat and IBM, plus myself and a few other folks in the working group, who have been drafting this structure, which is in this document, in terms of how we would be able to capture the reports in a common format. And of course, the advantage there being that, through kubectl or any other standard API tools, upstream management tools can then do further processing, auditing, logging, and enrichment of these reports.
H
So for that, in the repo we do have a PR pending which we are almost ready to merge. What this does today is: it has the definitions. We ended up using the same code generation as kubebuilder does, so in the policy report folder, in the types, we have basically defined a cluster-wide type as well as a namespaced type, and this report captures some of these common fields.
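As a rough illustration of the common fields being discussed, here is a sketch of a namespaced report as a plain dict. The API group/version and field names (`policy`, `rule`, `status`, `message`, `resources`, `summary`) are assumptions modeled on the general shape of such a report; the authoritative schema is in the pending PR, not here.

```python
import json

# Illustrative PolicyReport-style object; every field name here is an
# assumption about the draft schema, used only to show the common shape.
policy_report = {
    "apiVersion": "wgpolicyk8s.io/v1alpha1",  # hypothetical group/version
    "kind": "PolicyReport",
    "metadata": {"name": "sample-report", "namespace": "default"},
    "results": [
        {
            "policy": "require-run-as-non-root",
            "rule": "check-security-context",
            "status": "fail",
            "message": "pod runs as the root user",
            "resources": [
                {"kind": "Pod", "name": "nginx", "namespace": "default"}
            ],
        }
    ],
    "summary": {"pass": 0, "fail": 1},
}

# An adapter for a given policy engine would translate native findings into
# entries under "results" and keep the "summary" counts in sync.
failed = [r for r in policy_report["results"] if r["status"] == "fail"]
print(json.dumps({"fail": len(failed)}))
```

The value of the common type is exactly this: once every engine emits the same `results` shape, generic tooling can aggregate and query reports without knowing which engine produced them.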
H
So some of the tools we're looking at are kube-bench, OPA Gatekeeper, Kyverno, Cloud Custodian, and Starboard, which Liz had commented on here; that's one of the new tools Aqua is working on. And then Falco is potentially another one which we could use. So either way, we'll look at creating adapters, or in some cases the tools may natively support this output format. And I think there was RHACM; maybe Erica, if you want to jump in and share a little bit more about what the IBM team is also doing with RHACM and what's proposed there.
H
So really, we're just looking for any updated thoughts and feedback, and again validation that this seems like the right direction. And then, I think, the other thing that we wanted to discuss is: once we have a few of these policy tools at least, once we start creating samples and adapters, what else would we want to get done before this is more widely circulated? What would be the next logical step for something like this?
I
That's basically what we're doing. The thing we're trying to use this for is as the coordination point: there are a lot of different tools and standards that are hoping to standardize on this, and it doesn't necessarily need to be, you know, a server or anything, but at least having a definitive, authoritative place for people to standardize on is what we're looking for. If it has to stay in our, say, working group repo, that's okay, but we would like to be able to...
J
H
So I think one thing, and this has come up even in other working groups: since working groups have no official code, or I guess any official artifacts, right, the question is, is there any sort of graduation from that, or do they stay in the prototype and proposal phase? That's really what we'd be looking for, and I mean it can be used either way, so you're right.
A
I guess what I would ask is: you could have a different repo in kubernetes-sigs that was for this subset of work, like graduated prototypes. Is there some conceptual benefit to that, versus, for example, making a unique GitHub org that houses this project, and that's where you anchor your community? I'm trying to relate it to, I guess: is there a reason for it to be in kubernetes-sigs when it's, like, done or ready?
J
To be clear, I don't know jack; I'm just looking at it and saying, like, I don't think it ever makes it into k/k, and I don't know that it ever needs to be a native API. If we just need an officially sanctioned kubernetes-sigs repo, that seems pretty reasonable, right? Somewhere where it can live after the working group goes away.
H
It would be stored in etcd, and, you know, one of the discussions we've had is how we aggregate, and what would be some of the pros and cons in terms of sizing and scaling. So there is some flexibility in the definition for that; we weren't thinking of having API aggregation. We also had quite a lot of discussion on whether to follow the standard Spec and Status structure, and for this it seemed like it didn't make sense.
H
That was because it's an output from a policy engine, right; it's not really driving a controller. And apparently there was a question: yes, it would be in etcd, and of course it would be accessible through kubectl and standard API calls. But the definition itself leaves enough flexibility for things to be aggregated at the cluster level, node level, namespace level, or even per workload, right. So we could have a report per workload, if an engine chooses to output things in that manner.
F
I
We're aware of that. The line we're trying to draw is: this is the state of the cluster as it is. It's not for keeping your historical data, and it's not for including lots of external, non-Kubernetes-related policy information; maybe you could make a link to it, or a label.
H
So we'll have to do some iterations, and one of the first targets we had was kube-bench, to try out writing some adapters, outputting the report, and seeing what works best, not just for storage and flexibility, but also in terms of admins being able to view this and, you know, use it appropriately.
H
So there's more work we need to do; we just wanted to socialize this at this point. And, you know, I feel we should get one or two adapters under our belt first, and then, once we have that, we'll come back with some more demos, and at that time we can figure out where this goes as a next step.
A
K
Service account tokens, in a backwards-compatible way. So the three things that we had identified were: pod security policy constraints around different volume types (people might have set up their policies to allow secrets, and we would start injecting projected volumes that wouldn't fit within those policies); the other two were rotation (clients that didn't expect to rotate) and file system permissions, and those have already been addressed. So this is the last thing that would let us unblock rolling it out. Yeah.
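The rotation concern mentioned here is that clients which read the token once at startup break when the kubelet rotates the projected file. A minimal sketch of the re-read-before-use pattern follows; the real token lives at the projected volume's mount path, which is simulated here with a temp file.

```python
import os
import tempfile

# Stand-in for the projected token mount path; in a real pod this would be
# a file the kubelet rewrites periodically.
token_path = os.path.join(tempfile.mkdtemp(), "token")

def current_token() -> str:
    # Re-read the file on every use, so a rotation is picked up instead of
    # serving a stale cached token for the process lifetime.
    with open(token_path) as f:
        return f.read().strip()

with open(token_path, "w") as f:
    f.write("token-v1")
first = current_token()

# Simulate the kubelet rotating the projected token in place.
with open(token_path, "w") as f:
    f.write("token-v2")
second = current_token()

print(first, second)
```

Clients that instead captured the token in a variable at startup would keep presenting `token-v1` after rotation, which is exactly the compatibility issue that had to be addressed.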
A
Okay, I don't think anyone is... I don't see the kubectl folks on the call, but the general ask was: hey, we want to move kubectl to staging and your auth code is in the way; what do we want to do with your auth code during that move? You probably have, like, the strongest opinions on this.
J
I'm not trying to push it somewhere like that, although you could probably do worse. I'm thinking more like, you know: is it something we would use for things like a shared dependency between the scheduler and the kubelet? Right, like, they face similar problems; they have these helpers that they use with their API types, and the helpers have a fair amount of code around them.
J
The distinction here from k8s.io/utils is that this one is explicitly intended to be depended upon. I think it's pretty reasonable for it to include the clients, so client-go, k8s.io/api, and k8s.io/apimachinery, and the requirement for entering something like k8s.io/utils is that you do not depend on those. So that's why this is a different bucket than the one we already have, and that dependency chain, though, is actually why, of the two OpenShift ones, one of them was allowed to depend on k/k and one of them is not, I know.
G
Yeah, so this had kind of originated... I just sort of briefly talked with Mo on Slack about what it would take, and what people's thoughts are on adding something like request signing to Kubernetes. By request signing I mean something like AWS SigV4, where you have a shared secret key or even an asymmetric key. In the current context, AWS SigV4 has a shared secret between the client and the authentication server.
G
But you can also do request signing without a shared secret; you can have asymmetric request signing. To my knowledge, there's not, like, an open-source implementation of something like AWS SigV4 on the server side for you to do your own thing with yet, but I kind of wanted to poll and see what people thought about this.
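The shared-secret flavor of request signing being discussed can be sketched in a few lines. This is a toy, not real SigV4: actual SigV4 canonicalizes the query string, headers, and payload hash and derives scoped signing keys, whereas this only MACs the method, path, and date to show the basic shape.

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, date: str) -> str:
    # Bind the signature to the parts of the request that matter, so the
    # credential cannot be replayed against a different call.
    string_to_sign = "\n".join([method, path, date])
    return hmac.new(secret, string_to_sign.encode(), hashlib.sha256).hexdigest()

secret = b"shared-secret"  # would be provisioned out of band
signature = sign_request(secret, "GET",
                         "/api/v1/namespaces/default/pods", "20200624")
print(signature)
```

The server holds the same secret, recomputes the MAC over the request it actually received, and compares; any change to the method, path, or date yields a different signature.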
L
Front proxy is the approach I know best; Mo, and maybe David as well, knows more. I would really like to make it easier for people to do stuff like Kerberos or SigV4 through front proxies, and I would definitely endorse a project that provides some of the infrastructure for people to extend authentication using proxies.
A
So I thought about it in that frame. I guess I don't know how well that would pan out, because when I was fixing up the KEP for exec, I read over all the old stuff, and one of the core things there was: you don't want to run this external binary for every single request. And if you actually want to do request signing, you should do it for every single request, and maybe the overhead isn't that bad?
A
Maybe your binary is small or something, so it spins up real fast, does the signing, and then disappears, and maybe that's tolerable. But there is an obvious incongruence with what the exec API is designed to do: it's designed for, like, you know, expiration. The response is basically: "hey, long-running client-go process, please cache this credential in memory for that long before you call me again for more stuff", and never mind that we don't pass any of the request information to the exec plugin.
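The caching model being contrasted with per-request signing is visible in the exec plugin's response shape: a token plus an expiration timestamp, with nothing about any individual request. The JSON below follows the general shape of the `ExecCredential` response; treat the exact version string and values as illustrative.

```python
import json
from datetime import datetime, timezone

# Illustrative exec-plugin reply: the client caches status.token and reuses
# it for every request until status.expirationTimestamp passes.
response = json.loads("""
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2020-06-24T18:00:00Z"
  }
}
""")

expiry = datetime.strptime(
    response["status"]["expirationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
).replace(tzinfo=timezone.utc)

# Note what is absent: no method, no path, no per-request material at all,
# which is why this protocol cannot express request signing as-is.
print(response["status"]["token"], expiry.isoformat())
```

This is exactly the mismatch in the discussion: the protocol hands back one credential to cache, while signing would need the plugin invoked with each request's details.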
L
I agree that it's definitely not there today, but we could brainstorm to figure out whether it's something feasible. I'm not sure yet, but yeah, I agree that we're far from it today.
A
This was pretty limited as an extension point for the API server, though, because once again we can't do, like, TLS-based signing on top of that, at least not obviously. So that's why I think these two topics are sort of related, and I'm kind of looking for, you know, the various leads to maybe respond to that issue and the enhancement PR, like: hey, do we...
A
E
My suggestion was the HTTP proxy environment variable, which, unfortunately, you know, causes weird stuff to happen for all of the HTTP clients on your machine. So what if we did a kubectl proxy, and somehow magically made kubectl proxy do whatever it wanted with the request before sending it on to the real server?
A
So even if you back up and say you only have just tokens and not an asymmetric thing, you can imagine a token that was only there to get pods, and that's actually pretty cool, because, you know, bearer tokens are not so great at not losing your identity across the wire. But okay, if you can send a token and it only works for, like, this one particular request, well, that's better.
A
Yeah, request signing effectively becomes part of the capabilities of this model: you're saying that I am making this request using this credential, and even though the credential represents me, the action I'm taking with that credential must also constrain the overall access. So obviously, I have to be able to get pods, but I'm also saying that the credential can only be used for getting pods.
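The "token that only works for this one particular request" idea can be sketched from the server's side: the credential carries the verb and resource it was minted for, the server recomputes the MAC, and a replay against any other verb or resource fails. Every name and the credential format here are made up for illustration; nothing like this is an existing Kubernetes API.

```python
import hashlib
import hmac

def mint(secret: bytes, subject: str, verb: str, resource: str) -> str:
    # Embed the scope in the credential and MAC over it with the server key.
    mac = hmac.new(secret, f"{subject}|{verb}|{resource}".encode(),
                   hashlib.sha256)
    return f"{subject}|{verb}|{resource}|{mac.hexdigest()}"

def authorize(secret: bytes, credential: str, verb: str, resource: str) -> bool:
    subject, cred_verb, cred_resource, digest = credential.split("|")
    expected = hmac.new(secret,
                        f"{subject}|{cred_verb}|{cred_resource}".encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time compare of the MAC, then require that the scope baked
    # into the credential matches the request actually being made.
    return (hmac.compare_digest(digest, expected)
            and cred_verb == verb and cred_resource == resource)

secret = b"server-secret"
cred = mint(secret, "alice", "get", "pods")
print(authorize(secret, cred, "get", "pods"))     # scoped use succeeds
print(authorize(secret, cred, "delete", "pods"))  # any other verb fails
```

Unlike a plain bearer token, intercepting this credential only grants the one action it names, which is the property the discussion is after.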
K
There's been a lot of discussion, and there are, like, no notes in the agenda. So that we don't lose some of the things that were said here, can people maybe summarize a couple of their points under the item? That would be super helpful. You can all start looking at the agenda; sorry, you can do that either now or after the meeting. Thanks.
G
No, I was just saying that, yeah, I don't know, I'm not objecting to the server-side, like, front proxy thing. Like you and I talked about, the kubelet doesn't support front proxy today either, but it's a relatively simple change to add that if we wanted to, same with, like, dynamic client CA for the kubelet pointing to a config now. But yeah, I think it is the client side.
G
That's the really tricky thing, because when EKS launched, we relied on the exec-based auth, which was, I think, in alpha just over two years ago, and which, you know, adds a new service to operate. In terms of widespread adoption in client libraries, we had a lot more, you know, requests for CA-based auth when we got started, just because exec hadn't spread throughout the ecosystem, especially, like, in other languages.
K
I have somewhat similar questions about whether externalizing that from within kubectl is the right place to do it, or if pointing it at something which could do the signing, like managing the TLS connection to the server, would also be a better approach. So I think it has similar questions to the previous topic.