From YouTube: SIG-Auth Bi-Weekly Meeting for 20230412
A: All right, hello everyone. Welcome to the April 12th, 2023 meeting of SIG-Auth. Let's kick off; we have quite a few items on the agenda. First off, I just want to make a quick announcement in case you're at KubeCon, or if you're able to join remotely: we do have a SIG-Auth Deep Dive — the link is provided — and we'd love to catch up with folks. All right, I don't think there's anything other than the discussion topic.
B: Yeah, so last time the feedback was: if our motivation is just automatic rotation, we can do that through files — there's an existing path to just have it pick up changes from disk. We don't want to do that; we don't want to have these keys on disk. The idea is to decouple this logic
B: fully out of the API server, so that you can then have this separate key-server component do whatever you want — for example, pull from KMS. I listed a few other use cases in there, but —
D: Right, so is it just two use cases — rotation, which we already said wasn't really valid for this one, and then out-of-process? Yeah.
B: Yeah, it's mostly around out-of-process, so that that logic can be decoupled out of the API server, and then people can extend it in the way that suits their needs.
B: Yeah, that's a good question. I was thinking to have that out of scope of this, because — I guess, what's our long-term plan with those legacy tokens?
F: We don't auto-generate them anymore, but people can request them, and there is not a plan in place today to get those people onto a better system without breaking them. Basically, you have been able to request —
F: A reasonable path for migrating to auto-generated ones exists, and that's basically done; we have some cleanup work to do there, but there's not a path for migrating explicit dependencies on the requested ones. We could maybe look at moving the generation of a token requested in a secret from the controller manager into the API server — that might be a possibility — but the nature of those tokens I don't think we can change reasonably, because —
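For context, the "request a token in a secret" path being discussed looks roughly like this — creating a Secret annotated with a ServiceAccount name causes the token controller (in kube-controller-manager today) to populate it with a long-lived legacy token; the names here are illustrative:

```yaml
# Illustrative only: requesting a legacy (non-expiring) token for an
# existing ServiceAccount named "builder" in namespace "ci".
apiVersion: v1
kind: Secret
metadata:
  name: builder-legacy-token
  namespace: ci
  annotations:
    kubernetes.io/service-account.name: builder
type: kubernetes.io/service-account-token
# The token controller fills in data.token, data.ca.crt, and data.namespace.
```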
H: It's up to the operator of the API server if they want to trust that email or not, but that would still allow you, for the temporary ones, to basically not have any key that you have to hold on to.
D: I mean, yeah — I don't think it's out of scope. I think if you're going to try to fix it, you should actually try to fix it, right? So, stepping back for a bit — actually, let me pause for a second. David, you wrote a comment there; do you want to talk about that first, since I think you wrote —
D: So David had a comment on the agenda — Rita, do you want to put the agenda back on screen real quick? — which is basically: would we recommend or require that service account tokens are not reusable between different clusters?
D: I mean, okay, sure — yes: if your UID generation never conflicts, which it really shouldn't, you should be okay. That's fair.
F: Yeah, I'm trying to understand what keys are being used that are more powerful than access to the kube API server would be. If you have access to the kube API server, you can already do things — you could restart the kube API server with another key pair so that it would spit out tokens and accept tokens with that key pair, if you have that level of access.
H: It's mostly about the exportability, right? If you have a key — or a set of keys, even if you're rotating pieces — that are exportable, there's no audit process on those keys. And the external —
H: The verifying of tokens signed by the API server that are used by other systems — say, cloud auth somewhere — is going to have to trust the API server; well, not necessarily the API server, but wherever you have the issuer endpoint for the trusted set of keys. So even if a bad actor who controls the API server does add a new rogue key —
H: They also have to have access to the OIDC issuer — the public OIDC issuer, or the verifying URL, right?
D: You don't need to do that, right? You could just use the request-header stuff — the ability to just tell the API server who you are — so you can impersonate any identity and just ask it to mint tokens. You're right that those would get audited, because it would be following the regular path; there would be a regular audit trail for this. But it doesn't prevent you from basically infinite abuse of the system in an online attack.
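The "mint tokens" path referred to here is the TokenRequest subresource; combined with impersonation (the `Impersonate-User` header, or `kubectl --as`), a caller with broad API access can mint short-lived tokens for any service account. A sketch with illustrative names:

```yaml
# Illustrative: POST /api/v1/namespaces/default/serviceaccounts/builder/token
# (kubectl equivalent: kubectl create token builder --duration=10m)
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences: ["https://kubernetes.default.svc"]
  expirationSeconds: 600
```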
H: Okay — because then, yes, an operator could potentially at one point create a rogue key, update an API server key, or misuse a credential — they have access to the API server — but they don't necessarily have access to the external system that those tokens are being used for. Right. So take the case of AWS: even if I'm not running EKS — I'm running kops, and I put my keys in an S3 bucket for my OIDC provider —
H: Now, if I want to get AWS cloud credentials by assuming some role with a token signed by the API server, I have to have the key, and I have to be able to publish that key to the public endpoint — and it is now auditable. It's — yeah.
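The S3-hosted issuer being described serves standard OIDC discovery documents that the external relying party (AWS IAM, in this example) fetches in order to verify tokens; the bucket URL below is purely illustrative:

```json
{
  "issuer": "https://my-cluster-oidc.s3.amazonaws.com",
  "jwks_uri": "https://my-cluster-oidc.s3.amazonaws.com/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
```

This document is served at `<issuer>/.well-known/openid-configuration`, and publishing a new signing key means writing it into the JWKS document at `jwks_uri` — the extra step the speaker notes an attacker would need access to.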
A: Just a quick time check, FYI: do we want to time-box this?
F: We had said we did. My take is that this is pretty niche.
F: Otherwise we were sort of half-fixing the problem in Kubernetes, and it doesn't really make sense. And someone mentioned something about extending this, and that raises big questions for me — I don't really want the token surface to be extendable; I'm not sure exactly what was meant by that. I think those have to be resolved, and given the list of what we have in flight, I would rather see us finish some in-progress things before we take on a new extension point. But that's my —
D: You already have to run an auth proxy if you want to intercept other things.
D: You can already intercept everything, Jordan — request header lets you do that; that's kind of the point. I'm not saying it's a good recommendation, but if, at the end of the day, we're talking about a significantly complex feature that has an incredibly niche use case, I'm kind of okay saying that you get an incredibly niche implementation. Maybe I'm hand-waving there a little bit; I haven't thought through that too much.
D: But the other thing I wanted to suggest here, to see if it was tractable: I'm generally really not in favor of any more of these gRPC things being shoved into core, critical parts of the API server. I've learned my lesson from KMS — it's just a crappy thing; don't do it, don't add more.
D: Is there a way we could try to leverage that existing extension point to help in this regard somehow — like having some kind of way to chain the KMS, to give you a derived key or something? Something that's not on disk, but maybe in memory, maybe with some kind of TTL associated with it.
D: I mean, it would stay in process, right? The API server is not going to write it out; it's got to stay in process. So I think at some point you can start saying that someone's not necessarily going to be able to get into the memory of the API server. But maybe, if that's still in your threat model, then things get really painful.
H: Yeah — I mean, if they have access to the same identity, or whatever permissions the process that's running the API server has to decrypt the token — that's true, they could do that. So —
D: Yeah, so in that sense — the way I've seen some of this stuff handled, in Azure AD and GitHub for example, is they use some cert-based signing and they issue intermediates that are short-lived.
D: So you can't really meaningfully export those, because they die very quickly, but they're not inline to the request. You ask for it — I think it goes through some kind of CSR flow — but you generate your local key, you ask for it to be signed, you have an intermediate, and you get to use it for a certain amount of time.
D: The intermediate is present for however long — it could be as short as 10 minutes — because the point is to make it so that it's not inline. That's the core thing I want to try to avoid here. And yes, I agree with Jordan's time box, because now we're at 20 minutes.
D: Cool. Jordan, were you the one who wrote all these KEPs out? Yeah.
F: I did. Like I said, I'm not really expecting us to leave today knowing exactly what we're working on for 1.28, but I first wanted to link to the open enhancements issues tagged with SIG-Auth, just to remind us that there's a lot of stuff that isn't done yet that we kind of have in flight — so as we look at spinning up new stuff, let's also try to make progress on and complete some of the in-flight stuff.
F: So that was just the first thing: take a look there. If there are things you've been involved in that you kind of forgot about, or that are sitting there in beta, maybe look at what it takes to get to GA and whether we can make some of those things happen. The second bullet was: I think it'd be helpful to know where things are in each stage. Some things need design, some things are already designed and we just need to implement them and make progress, and some things have implementations that need review.
F: So whatever the work is that needs to be done — the next step, what we're wanting to accomplish for 1.28 — let's be really clear about what the goal is for 1.28: land a design, or have an implementation and a review. And then we need to make sure we're load-balancing folks. This isn't really a wish list — "here's what I want to be in 1.28 but I'm not planning on working on" — it's more "here's what I'm planning to do in 1.28."
F: So I put in a lot of question marks and kind of put in names of people, but hearing from you all would be super helpful — here's what you are planning to do, whether it's work on a design or work on an implementation. I'm trying to take question marks off of things as people commit to them, or as we get confirmation that people are planning to work on them.
F: Yeah, I took a first pass on things that I am planning to work on, either as a reviewer or helping drive implementation, or that I knew I'd seen other people actively updating, and I took a first stab at milestones — but that's just a first step.
F: And then I roughly sorted this list in order of the things that were furthest along in the pipeline. So there are a couple of things — the who-am-I API: very small surface area, pretty well understood, pretty much just mechanically stepping through the progression. So I think that one is a reasonable thing to target for GA; I'm happy to review it, and I need to get confirmation — oh, I see someone in the chat. Hopefully this is a pretty mechanical graduation to GA that we can get in. There are some tests required to improve coverage, and maybe also writing a blog post about it, but no obstacles. Okay.
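The who-am-I API mentioned here is the SelfSubjectReview resource (surfaced by `kubectl auth whoami`); a sketch of the call shape, assuming the beta API version that was current at the time of this meeting:

```yaml
# Illustrative: POST /apis/authentication.k8s.io/v1beta1/selfsubjectreviews
# with an empty body; there is nothing to fill in on the request side.
apiVersion: authentication.k8s.io/v1beta1
kind: SelfSubjectReview
# The response's status.userInfo echoes back the username, UID, and groups
# that the API server resolved for the caller's credentials.
```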
F: The next one was the cleanup — this is actually around the legacy token use. So tracking of legacy tokens is a pretty mechanical graduation.
F: I don't know if we want to step through this — Sergey is not here. Did you want to talk about KMS v2? No?
D: I thought Sergey was here — that's where I saw him. Maybe he had to drop. Okay, sure, so we can do that nearer the later part of the agenda; it's what I'm highlighting here. I have some stuff I want to make changes to for KMS v2 — the crypto stuff. So in my mind, KMS v2 cannot go to GA in 1.28; I need two releases, because I planned on making some read-level changes and then some write-level changes.
D: So I needed two releases to do that, unless I wanted to add flags — and I didn't really want to add flags.
D: So if we're saying KMS v2 has to have near-complete automated rotation, then we've got to finish all the storage-version stuff. If instead we pull that out, maybe into its own thing, and track it separately, then it would not necessarily block KMS v2 itself from going GA. So there could be a separate KEP that's basically: here's the storage version API that does this right now, and here's the API server identity stuff that does this right now — put them together, add some enhancements, and there you go.
D: Here's a new KEP that says: how are you going to do automated rotation? I don't know where I've landed on that yet, but I think those are the two big things.
J: So we have one more piece of alpha-scope work to land, I think in 1.28, which is the actual projected volume support. And then you talked about beta, but I think it makes a little more sense to land kubelet pod certificate support and have them both be at alpha first, so we can see how they play together — if that makes sense. So my kind of optimistic, forward-looking plan was — I have that draft KEP about workload —
F: I'm not opposed to getting the projected volume in alpha first. I do think there's value in the API even if we don't have the projected volumes — just as a way to say "here's how you get the signing certificates for this." Even if things were talking to the API directly, I think that's a useful API even by itself, so delaying —
D: Okay, cool — multiple authorization webhooks.
F: I would like to be able to control the failure policy on a webhook. Right now, when there's an error calling a webhook — a timeout or something — authorization says "well, no opinion," and goes on to the next authorizer. I would like to be able to say that if this authorizer is unavailable or times out, that counts as fail-closed. So I am interested; I can help with review at the very least, maybe implementation or design, but —
F: Let me — so, I don't want to couple the ability to run multiple webhooks to an API. If we also want to expose it via an API, I wouldn't want to run that API. But if someone wanted to work on exposing that optionally, in a non-conformant way —
D: I mean, if your customers ask you to, you might have to, right? That's kind of the gist of how these things go.
D: Okay, yeah — I don't know if I have had any asks for just the multiple bit — having more than one, and then having the failure policy; I don't think I've necessarily heard too much about that. I have heard "I want to be able to configure my own webhook on your cluster."
E: Dude, okay — I see you're typing right now.
F: That's a good question, Tim. So, being able to have some indication in the config like "only route this type of request to this webhook" — where what that condition can consider is open to debate. That's an interesting thing to settle in the design.
D: Yeah — I did talk, I think at the last KubeCon, to Joe about whether I could reasonably use CEL for filters here, and he seemed pretty positive about that. And to me, I think if you're going to have fail-closed webhooks, and more than one of them, you really probably want a filter to make sure it only invokes them where it makes sense — because this isn't like admission; it's going to break your cluster in a really special sort of way.
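A sketch of the kind of configuration being debated — multiple webhook authorizers with an explicit failure policy and CEL match conditions. The field names are assumptions modeled on how this design later shipped as the structured authorization configuration file, not something agreed in this meeting:

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Node
    name: node
  - type: RBAC
    name: rbac
  - type: Webhook
    name: example-policy
    webhook:
      timeout: 3s
      failurePolicy: Deny        # fail closed instead of "no opinion"
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig
      matchConditions:
        # CEL filter: only consult this webhook outside kube-system,
        # so an outage cannot fail-close control-plane traffic.
        - expression: "has(request.resourceAttributes) && request.resourceAttributes.namespace != 'kube-system'"
```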
F: But I wasn't going to work on it to begin with. Yeah — today it's implicit: it's implicitly at the front of the authorizers, and for whatever else you specify — node, webhook, RBAC, whatever — you can specify the order, and then the superuser one shows up at the beginning. What we'd want is a structure in the config that says "here's the order I want everything to run in, and here's where in that order I want the superuser one to go."
D: So I plan on working on this one this coming release, and I'll probably drag a couple of others in with me. And I think — I'm forgetting his name, but a member of the community helped write up a bunch of tests that exercise the existing OIDC functionality, so I want to get that merged sort of early in the 1.28 cycle, so we can have the CI signal.
D: So that if we start messing with this code — if we actually get to writing code this time around — we're not going to break stuff. So I think, Jordan, probably you and I need to sort of agree on the API surface sooner rather than later.
F: Like arm-wrestle over what we make a prereq — yeah. For 1.28, honestly, if we settled on a design for this and got test coverage of the existing code in place, I'd be pretty happy with that progress, even if the implementation didn't quite start yet. But —
D: So if we write code that messes with the existing stuff, it needs to be really strongly gated, and ideally we don't mess with the existing stuff at all, if possible — but we can figure that out. But yeah, I think that's the core thing, right: if you have an alpha implementation, it doesn't have to be perfect or done. It is alpha; that is what that means. Yeah. I remember the proof of concept, or something that worked already — and that was a completely different authenticator, so not messing with other authenticators at all. So, yes.
D: But yeah, so that's the hard part, right? I had basically thought that we would just copy-paste the existing authenticator and start ripping it apart and making it do what we wanted. But that's really hard to do if we have to maintain perfect feature parity with the other one — or behavioral parity, at least. But I understand where you're coming from on that, Jordan; it's just that that wasn't sort of what was on my mind. But I —
F: And Mo — okay, let's —
D: Yeah, I think that one is fine. I think we'll — Max, are you going to be at KubeCon, by any chance?
D: Oh, you are? Okay, that's excellent. That way we — all right, I'll find you then, and —
F: Okay — again, you were commenting on the multiple authorization webhooks. Would you be interested in helping review some of that as well?
F: Sorry — the multiple-, the authorization config; you had asked a question about the condition stuff.
G: I'll be around for a couple weeks and can help review until August. So if this doesn't land in the 1.28 cycle, I'd love to be involved, but I'm probably not going to be around for much of 1.28. — Okay, I'm sorry, I totally forgot. First —
F: All right — the fine-grained authz one, I'm happy to drop that out.
F: We have a lot of work to do. I'm okay with reordering some things.
D: Yeah, let's see — I know Rob is going to be at KubeCon, so we could talk to him about ReferenceGrant and see how we want to move forward with that. I think we had quite a lot of open questions about how —
F: Questions like — the ReferenceGrant one — I think the ReferenceGrant proposal is like half of a solution; maybe less than half, maybe a third of a solution.
F: So I think this is solving a real problem, but I would at least like to think about whether we can actually solve the whole problem, which will require more work. But Rob's not here; I think we should probably talk with Rob — and Nick, I think, was the one who opened the issue.
F: Okay — I'm mostly counting on the API Machinery folks to find reviewers, but yeah, thanks for putting this on our radar.
D: Yeah, I think a lot of people are really excited for the CEL stuff. It makes an entire class of problems disappear.
G: I think the other thing to note about this is that it's dependent on CEL admission control going to beta. So if it looks like that's not going to happen in 1.28, then that'll block this.
D: Yeah — if we're done with the KEP stuff, I'll try to make this not too long. You want to open up the doc? Okay. So, if folks haven't read this thing yet, this is going to be kind of hard to follow along, but I'll do my best.
D: So obviously the part where the remote KMS encrypts a data encryption key is sort of unchanged. But currently, on startup and on key rotation, we do some fancy nonce calculation — sorry, I'm saying that wrong. When we use our data encryption key, the one that has been encrypted by the KMS plugin, every time we do an encryption with it we generate a new nonce. The nonce is both partially random and has a counter, and we are limited to 12 bytes, because AES-GCM is annoying. So that's kind of how it works today.
D: So there were concerns around basically anything that could cause the counter to be reset or reused in a way that's unexpected — primarily around any kind of technology that would restore the process to an earlier state and then have it run from that earlier point in time. And then I've been reading up on various guidelines around how long you should use a key for — basically how much data you should encrypt with it and how many encryption operations you should do — and all of that's based on the size of the messages and all sorts of other stuff.
D: But if you look at the most conservative estimates, they end up being on the order of a couple hundred gigabytes. So you can actually very easily hit this limit on a high-write-load cluster. So I wanted to try to basically address —
D: Let me step back for a second. KMS v1 had none of these issues, because it always made a new DEK every time it was going to do an encryption. So it was always a new DEK — it had this horrible failure mode where it always needed the remote plugin to be 100% available for every single call, but it didn't have any cryptographic shortfalls, and it had a purely random nonce.
D: So I wanted to come up with an approach that retains the existing minimal reliance on the external KMS, but restores more of the capabilities that v1 had from the crypto standpoint, while not gaining the massive cache-size problem that v1 also has — to function efficiently there, you need effectively an unbounded DEK cache, because you just have a one-to-one mapping.
D: So this basically describes a scheme that's vaguely called nonce extension. It's a mechanism for adding extra bytes that are used to derive subkeys — it's done, for example, by the ChaCha one, I forget the exact name now. First — Mike, I see you're unmuted; did you have thoughts?
D: Say the counter was at a particular state at time 10, but time 5 is when you took the snapshot, and then you restored the VM from that snapshot. It's going to go from 5 to 10 again — it's going to count back up, right? So all of the things that it encrypts after that will have collisions.
C: Yeah, I wonder how realistic this concern is.
D: Yeah, that one I'm not super sure about. I think the gist, though, is that when it happens, it's completely catastrophic, right?
C: If two things that write to disk —
D: Yes — two things that write to disk. If you have a single nonce collision, you immediately leak the authentication key, and I think you only need a handful of collisions to leak the full key — you have to do more complex math — but you end up losing the actual encryption key, and because we're using the same DEK for all of them, you basically lose the entire backup.
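To make the collision risk concrete: with fully random 96-bit AES-GCM nonces, the birthday bound gives the approximate probability of any two nonces colliding after n encryptions under one key. This is a back-of-the-envelope sketch, not the actual KMS v2 analysis:

```python
def nonce_collision_probability(n_messages: int, nonce_bits: int = 96) -> float:
    """Birthday-bound approximation: p ~ n(n-1) / 2^(b+1) for b-bit random nonces."""
    return n_messages * (n_messages - 1) / float(2 ** (nonce_bits + 1))

# NIST SP 800-38D caps fully-random-nonce GCM use at 2^32 messages per key,
# which keeps the collision probability below roughly 2^-32:
p = nonce_collision_probability(2 ** 32)
assert p < 2 ** -32
```

The partially-random-plus-counter construction described above avoids this bound for normal operation; the snapshot-restore concern is precisely that a rewound counter reintroduces deterministic reuse, which no probability bound protects against.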
D: Right — that's the real issue: even if this happens 0.01% of the time, or 0.001%, in those cases the whole system falls apart. So right now the KMS v2 docs basically say: do not use VMware state restores with this feature, because we can't guarantee that it'll work exactly like you expect.
C: Yeah — I guess I'm trying to think through the full scenario. Okay, I understand the concern now; I'll have to mull on it. But thank you.
D: So the design basically says we'll just use an HKDF with a salt that we store beside — a public salt that we store with the existing nonce. When I get around to actually making an actual PR for this, I'll have all the links to the material where I looked into this. But the scheme is generally known as a nonce-extension scheme.
D: We can go back to having purely random nonces, because we no longer have to worry about collisions — and your derived key is mathematically unrelated to the original key, because of how the hash works, but it's deterministic based on the random salt plus the original key.
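A minimal sketch of the derivation being described — HKDF-SHA256 (RFC 5869) producing a subkey from the KMS-protected DEK plus a public random salt. The info string and key sizes here are illustrative, not the actual KMS v2 wire format:

```python
import hashlib
import hmac
import os

def hkdf_sha256(key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869) with HMAC-SHA256: extract a PRK, then expand to `length` bytes."""
    prk = hmac.new(salt, key, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

dek = os.urandom(32)    # stand-in for the KMS-wrapped data-encryption key
salt = os.urandom(32)   # public salt, stored alongside the ciphertext
subkey = hkdf_sha256(dek, salt, b"derived-encryption-key")

# Deterministic given (dek, salt), yet computationally unrelated to dek itself:
assert subkey == hkdf_sha256(dek, salt, b"derived-encryption-key")
assert subkey != dek
```

Because the salt is random per derivation, a nonce collision under one subkey no longer compromises data encrypted under other subkeys derived from the same DEK — which is what restores the "purely random nonce" property without going back to a new DEK per call.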
C: To derive the keys — but then the sequence numbers for AES-GCM are the nonces; for TLS, the record numbers are the nonces in the AES-GCM cipher suite, right? And they don't even have the randomness that we add — because we are adding that random prefix to the nonce. Only in your PR there's some prefix that is still random. So —
A: Good thing we're not going GA in 1.10. Okay, I moved the rest of the topics to the following week — so sorry, folks, if we didn't have a chance to get to yours. All right, that's — I think that's it. All right, thanks everyone. Thank you.