From YouTube: Sig-Auth Bi-Weekly Meeting for 20230524
A: So everyone, this is the SIG-Auth meeting for May 24th, 2023. Let's see, I think we have a relatively light agenda today, so let's get started. Okay, so first announcement. I think I've spoken with you about this, Jordan.

A: So you would be nagged when there is a GA replacement, to go do the work to replace your stuff.

A: So Jordan convinced me not to nag, so we will deprecate in 1.28 and have it in the release notes in 1.28, and then in 1.29... Like, let's say in 1.28 we add the feature gate; it would be enabled by default in 1.28, but...

C: If we found a vulnerability in KMS v1, we would try to fix the vulnerability, but we're not going to try to address the scale issues or the things that KMS v2 was intended to address. Yeah, right. That makes sense, and just to be crystal clear, I know you and I agree on this mode: when we say deprecated, we are not ever intending to drop the ability to read KMS v1 encrypted data out of etcd, probably ever. People get really anxious when they hear the word deprecated, and...
A: Yeah, so I think sort of the maximum concrete plan I have is, if v2 was GA, maybe three releases, maybe even more than that, we would drop write support for everyone. So you could still move off of it, but you're not supposed to use it for writes anymore at that point. But that's like the earliest I feel comfortable, so like a year after v2 GA.

C: Yeah, but that's a separate conversation. This is just to tell people, like, correctly label the KMS v1 feature as: it's not going to get better.

A: That one is easy: does it leak the crypto keys somehow? Then sure, we will fix that. And if it somehow does that and we haven't noticed yet, almost certainly v2 is probably impacted too, so...

A: Okay, okay, so I don't hear any issue, though, on the relative semantics here. Okay, I can, I guess, move to the next one, which is a little demo. So I know I don't see any EKS folks on the call, but I know that they were interested in external signing of service account tokens, and one of the things I had asked about was to have support for certificate-based signing of service account tokens for that feature.
A: Basically, so that way, if you wanted to have external private key material that isn't directly accessible, you could still, like, provision short-lived certificates into the cluster that you want. So that way, in the same way that KMS v2 fixes most of the performance issues of KMS v1 by moving the external server out of the main loop of every call, basically that's what you could do with certificate-based signing. So I implemented that, and I was going to show a little demo of it, just kind of for fun.

A: Okay, so I just have a locally... right. It's probably way too tiny. I have a locally running terminal that has a local cluster up and running from that branch. And so, let's see, so if I show this... So, if you guys remember, OpenID has a JWKS endpoint for the API server. This is the regular key, the one that we have today.

A: Right, and then let's go... let's see, what should I do next. So if we were to create a token, we can... I mean, let's actually do...
A: So if I do a `whoami`, but with a token for the default service account, we can see this nice little warning that I added to the code base so I could actually see that my stuff was working. Jordan's really happy, because it's a warning, and that makes him happy. One of the easiest ways to demo something now, Jordan, is to log it as a warning so you can see it in your terminal. It's great, I love it.

A: It worked the very first time, so I was super skeptical that it actually was doing anything, so I had to add this warning just to prove to myself it was doing anything. But so, like, if we look at the token... So if we look at the token, and we just grab the header and decode it, we can actually see that it has an x5c certificate in there.

A: This is the root CA that's from the JWKS endpoint, and we can kind of see, you know, its parameters. And then, if we look at the one that I get from the service account token, it is a different cert that, you know, is not a CA, but it does have client auth on it. Whereas, like... I think, I don't know if openssl shows it here, I don't know if it shows it here or there... One of the things I ended up doing, after sort of reading stuff online about this flow: like, the biggest mistake I saw that people tend to make...
A: ...is they put a root CA into their trust that doesn't semantically make sense for tokens. So, for example, they'll put like DigiCert or something in there, so basically anybody's anything can kind of be used, because anyone can ask DigiCert to give them a cert. And obviously, I know, if there was CA-based support, someone would try to use their company root CA, because that's what people do. But I don't think the intent there would be that I want my company's root... any certificate signed by my company's root CA to also be able to sign tokens that are used within my Kubernetes environment. That doesn't make sense. So to try to safeguard that, I basically made the code require that the DNS name on the leaf certificate basically match the issuer hash that's configured on the cluster. I don't know if that makes sense; it's something.
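For reference, a minimal sketch of the kind of safeguard just described, in Go. The exact hash and encoding (SHA-256 of the issuer URL, unpadded lowercase base32) and the function names are assumptions for illustration, not the actual change; the point is only that the leaf certificate has to be explicitly scoped to this cluster's issuer before it is accepted.

```go
// Hypothetical sketch: reject a leaf certificate unless one of its DNS SANs
// encodes a hash of the cluster's configured service-account issuer, so a
// generic corporate or public CA chain can't be repurposed to sign tokens.
package satoken

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/base32"
	"fmt"
	"strings"
)

// issuerHashName derives a DNS-safe label from the issuer URL.
// The SHA-256 + unpadded base32 choice is illustrative only.
func issuerHashName(issuer string) string {
	sum := sha256.Sum256([]byte(issuer))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	return strings.ToLower(enc.EncodeToString(sum[:]))
}

// leafMatchesIssuer fails unless the leaf carries the issuer-derived name.
func leafMatchesIssuer(leaf *x509.Certificate, issuer string) error {
	want := issuerHashName(issuer)
	for _, name := range leaf.DNSNames {
		if strings.EqualFold(name, want) {
			return nil
		}
	}
	return fmt.Errorf("leaf certificate is not scoped to issuer %q", issuer)
}
```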
G: I had one question about the key ID stuff, right. So if the key ID does not match... like, if the array... if there are multiple elements in the array, like typically they use the key ID to figure out which one to use, but with this, would they have to go through the array and try out all of them?

A: Yeah, that... so that is fair, I guess, if you were... So for the API server, it doesn't matter, because it only trusts, like, one root CA... like, sorry, not one, it only trusts a bundle of a set of roots, right. So it doesn't matter how it chains up to any one of them or more than one of them, it just has to chain up to one of them, right. So right now the logic is very kind of simplistic, where it just says: okay, do I see a token?

A: Did it fail the normal token verification for service account tokens? Do I have the ability to do root CA verification, and does that token have an x5c header?
A: If it does, then it'll try to basically root up to the roots that are trusted, and if that works, then it'll take the public key from the certificate and use it to verify the actual token. So I think the reason you're asking that question, Anish, is the fact that, like, if you were an external provider and you were trying to verify these, it might be more difficult, because you might be verifying many, many different ones; like, you might have many, many different roots that you trust as separated trust domains, and then have meaning across them.

A: So I think we'd have to think more carefully about how we do that. Obviously, one way would be to make it so that the key ID that's in the token matches this one. You could totally just do that; I guess that would totally be fine. It's not really used by the API server, so I think we could make that work.

A: I'm not 100% sure how that plays in with x5c; I'd have to look at it more closely. So I can take that as an action to kind of... yeah.
C: I feel like some of the client libraries would trigger refreshing their view of the key set if they saw a token that had a kid that wasn't in the last fetched one, but I don't know if that's, like, an on-failure fallback or if that's a... Hopefully it's not "I saw a token with a random key ID, let me go fetch", because that would be a general denial-of-service vector, but yeah, I don't know. It's...

A: ...worth checking, yeah. So that's good feedback, yeah. And I think it probably makes sense to make this and this match, mostly because the API server doesn't use this at all for its verification, so it could just be for external actors to figure out which root CA they're supposed to chain up to. Let's see, was there anything else I wanted to show...

A: I think I'll stop the terminal stuff and go back to the code real quick.
A: So if we look at... this is like the main authentication logic for service account tokens, and basically, if regular verification fails, it will basically check: can I do TLS-based or certificate-based verification, and does the token that's being sent to me have an x5c header on it? If so, what it's going to do is it will take the verification options that it has.

A: So that includes the root CAs that it trusts, and then it will try to chain those up with the certificates that are on the x5c header, and if that works, it will take the leaf certificate, get its public key, and then use that to actually verify the token itself. So that's basically how it ends up chaining up, right. So you trust the leaf because you trust the root, and then you can use the leaf's public key to verify the signature of the JWT itself.
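A rough sketch of that flow in Go, for readers following along; this is not the Kubernetes code itself, and the helper names are made up. It pulls the x5c chain out of the JWT header, verifies the chain against the configured roots, and returns the leaf public key for the normal JWT signature check.

```go
// Sketch of x5c-based verification: chain the certs in the token header up
// to a trusted root, then hand back the leaf public key so the usual JOSE
// signature verification can run against it.
package satoken

import (
	"crypto"
	"crypto/x509"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

type jwtHeader struct {
	X5C []string `json:"x5c"` // base64 DER certificates, leaf first (RFC 7515)
}

func keyFromX5C(token string, roots *x509.CertPool) (crypto.PublicKey, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("not a compact JWT")
	}
	rawHeader, err := base64.RawURLEncoding.DecodeString(parts[0])
	if err != nil {
		return nil, err
	}
	var hdr jwtHeader
	if err := json.Unmarshal(rawHeader, &hdr); err != nil {
		return nil, err
	}
	if len(hdr.X5C) == 0 {
		return nil, fmt.Errorf("no x5c header")
	}
	var certs []*x509.Certificate
	for _, b64 := range hdr.X5C {
		der, err := base64.StdEncoding.DecodeString(b64)
		if err != nil {
			return nil, err
		}
		cert, err := x509.ParseCertificate(der)
		if err != nil {
			return nil, err
		}
		certs = append(certs, cert)
	}
	leaf, intermediates := certs[0], x509.NewCertPool()
	for _, cert := range certs[1:] {
		intermediates.AddCert(cert)
	}
	// Requiring the client-auth EKU mirrors the cert shown in the demo;
	// the exact key-usage policy is an assumption here.
	if _, err := leaf.Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: intermediates,
		KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}); err != nil {
		return nil, err
	}
	// The caller then verifies the JWT signature with this key.
	return leaf.PublicKey, nil
}
```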
A: Oh, it's not an alternative, it's meant to be like a co-requisite. Basically, if we support that flow, like EKS sort of wants, then every time a service account token is signed, you have to make a remote API call off the API server and go off to wherever to get stuff signed.

A: That's not necessarily tractable in the environments they're responsible for, so I wanted an approach that could sort of mimic what we did for KMS, which is keep everything within the API server as much as possible, and only in the background have the external remote call. So the way I would see this happening is the plug-in...

H: I guess, like, imagine this with... without x5c, where you just publish those, yeah.

H: I guess so; it would reduce the amount of fetches to the JWKS URI needed, and the trade-off is, like, larger tokens. Yeah.
A: So I think the way that would end up having to work is, you in your plugin would decide when you want to use this, so, like, that sort of is up to you, right. Like, you don't have to use it all the time, or you don't have to use it at all; there's no hard requirement to use it. I'm more talking about having an option, and then letting whoever is issuing the tokens issue them that way, and if they're not valid to the other entity, well, it's going to fail very obviously.

C: I'm trying to think of how this changes or expands what someone using kube-apiserver tokens, and how they're verifying them... Like, today we don't support this, so nobody verifying tokens against a kube-apiserver today would necessarily know if their x5c support was broken. As soon as kube-apiserver supports, even if it's optional, an integration that can produce tokens like this, now every client that runs against kube-apiserver and verifies tokens has to at least consider the possibility of tokens like this.
C: If they're going to, like, say, "we interoperate with random kube-apiserver XYZ"... So, right, it would be good to know... like, maybe a survey of not just the Go ones, but sort of the, you know, top one, two, three: if I was going to validate JWTs in random language whatever, what library or libraries would I likely be using, and do those actually support x5c? Like, knowing where this level of support is in the ecosystem would be helpful.

A: When you guys did the OpenID stuff for Kube, though, what did you validate in that process for consuming of tokens? I'm curious.
F: I think... what did we do? I think we...

H: Definitely, I recall AWS explicitly, and Vault OIDC.

C: The feature has been confirmed with credible, independent relying parties, basically the federation solutions for the cloud providers, and looking at popular OIDC libraries. So we could dig into maybe the graduating PR to see what got pointed at, but we did a similar sort of due diligence there. I think the tokens we were generating at that point were, like, bog-standard tokens.

C: It was really just the discovery, like hitting that discovery URL and being happy with the discovery content, yeah. So in my mind that's a smaller surface area than, like, a new type of token signature that's pretty esoteric.
A: Any other thoughts or comments? I think I would generally agree with the sentiment that we wouldn't do this unless there was something else to tie it to, right. Like, I wouldn't... like, so I ended up adding two flags to the API server to make this work: one was, well, what are the root CAs that you trust, and the other one is, when you generate a token, or when you sign a token, what x5c stuff do I need to attach to make it so that on the way back in it'll verify correctly.

A: So that's just, you know, like, what is the public cert for your key, right, because the private key that you're given for signing is just a private key, it's not a certificate, so it doesn't know that piece. So I did that, that's fine, but I don't think we would do that unless there was a compelling reason, effectively, or some variation of that.
A: All right, well, you can think on that, and I guess we can talk about KMS v2 crypto. So, Mike, you and I spoke about this at some length, and I took most of what you wrote up and then added more comments based on my last conversation with Jordan. I just wanted to see if we were in consensus enough, at least for me, to go try to implement all this and get it ready for 1.28.

A: Yeah, so I think the biggest concern I had with, like, the Tink one was... so, the open issue to add support into the standard library has, like, a PR or a change attached to it that is, like, mostly assembly code; I think there's a bunch of assembly in here. So then I was concerned about how the Tink library was implemented in just Go.
A: Yeah, so, for folks that are not familiar: it's nonce misuse resistant. So if you have a nonce collision with AES-GCM-SIV, all that happens is you lose the ciphertext of, like, the current object, in effect. You don't catastrophically disclose every single thing that you have previously encrypted with it.

A: Whereas, if we had regular AES-GCM, with like one collision, if I remember correctly, you immediately disclose the plaintext of the things that collided, plus the authentication key, and I think if you have more than, like, three or four, then you just catastrophically lose the actual encryption key that you...
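For readers following along, the keystream-reuse argument behind "disclose the plaintext of the things that collided" is the standard counter-mode one: reusing the same key $K$ and nonce $N$ means both messages are encrypted with the same keystream $S = \mathrm{CTR}_K(N)$, so

$$C_1 \oplus C_2 = (P_1 \oplus S) \oplus (P_2 \oplus S) = P_1 \oplus P_2,$$

and nonce reuse in GCM additionally exposes the GHASH authentication key.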
I: The authenticated encryption... well, the hash key, maybe; I may have the exact hash wrong. And then you can XOR two ciphertexts with the same key and same nonce to get an XORed value of the plaintexts. But with the SIV, I think that...

H: It's the ciphertext... right, the plaintext, the... and the...

I: ...nonce and the key are all combined to create a synthetic IV, which would be different for different plaintexts, and that results in the collision resistance, with something called POLYVAL. But I think the only problem with repeated...

I: ...reuse of a nonce with AES-GCM-SIV is with that single ciphertext. I would double-check that, but...
A: Even if you ignore that, I know that if you have a few nonce collisions, you can basically do some polynomial math to retrieve the actual encryption key; it's one of, like, the exponents in the shared polynomials across things. That one I did dig into a bit.

A: So that's the real question here: we're using the same data encryption key, effectively, for the lifetime of the process.

I: But the other thing is, there's no great implementation in Go, and I do not know if there is a FIPS implementation in any of the standard FIPS OpenSSL distributions, which...

A: I'm unaware of that myself, but I hadn't thought of that. That is true; that is in itself problematic.
I: And the people, the reviewers, are legit. I don't know how widely used it's been, though. It's pure Go. I guess we can move on to the second option.

A: You mean what's written here, Mike? Yeah, okay. Yeah, so the first option, and I'll try to write some of the details down in a bit for it, is basically what we just talked about, which is the SIV implementation. So that way, if we screw up the nonce... well, it's not a big deal, or it's less of a big deal, and we can kind of stop worrying about it.
A: The second option is basically a scheme to basically extend the nonce that AES-GCM is using. So the nonce size of AES-GCM is just 12 bytes, and there's no way to increase that, because if you try to give it more bytes, it just hashes them down to 12 bytes, so don't waste your time.

A: So that means that you do need to do some kind of key rotation, because the chance of collisions goes up if you do too many writes. So KMS v1 got around this whole problem by just, you know, never reusing a key, ever, but that created the problem of horrendous startup times and performance costs, because you were constantly hitting the remote KMS to decrypt or encrypt the DEKs you were using.
A: So this scheme, which is roughly analogous to what we discussed a while back with the diagrams and stuff I had, is: it would stop calling the thing that we send to the plug-in the DEK; we would probably start calling it the seed, which would be a set of 32 random bytes, and then we would use 16 bytes of a nonce with HKDF-Expand and SHA-256 to basically generate a random key per write.
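A minimal sketch of that derivation in Go, assuming golang.org/x/crypto/hkdf; the names and exact wire layout are illustrative, not the eventual KEP. The remote KMS wraps a 32-byte seed instead of a DEK, every write draws a fresh 16-byte KDF nonce, expands seed plus nonce into a per-write AES-256-GCM key, and stores both nonces next to the ciphertext so reads can re-derive the same key.

```go
// Sketch of the "seed + HKDF-Expand(SHA-256)" per-write key scheme.
package kmsv2sketch

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveWriteKey expands the long-lived 32-byte seed and a per-write nonce
// into a fresh AES-256 key and wraps it in GCM.
func deriveWriteKey(seed, kdfNonce []byte) (cipher.AEAD, error) {
	key := make([]byte, 32)
	if _, err := io.ReadFull(hkdf.Expand(sha256.New, seed, kdfNonce), key); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	return cipher.NewGCM(block)
}

// encrypt returns the KDF nonce and GCM nonce alongside the ciphertext;
// both would be stored with the encrypted object, mirroring how the 12-byte
// GCM nonce is stored today.
func encrypt(seed, plaintext []byte) (kdfNonce, gcmNonce, ciphertext []byte, err error) {
	kdfNonce = make([]byte, 16)
	if _, err = rand.Read(kdfNonce); err != nil {
		return
	}
	aead, err := deriveWriteKey(seed, kdfNonce)
	if err != nil {
		return
	}
	gcmNonce = make([]byte, aead.NonceSize())
	if _, err = rand.Read(gcmNonce); err != nil {
		return
	}
	ciphertext = aead.Seal(nil, gcmNonce, plaintext, nil)
	return
}
```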
A: So I don't know if the math works out exactly as, like, an additive to the nonce, like if you could pretend it was a 28-byte nonce total, but just as an example, secretbox uses a 24-byte nonce, and basically it considers that good enough for just having random nonces. And then we would end up storing the... So today we already store the nonce, the 12-byte nonce, alongside the encrypted DEK in the ciphertext; we would now just store more nonce data.

A: So that way we know how to regenerate the DEK that was actually used. And then, to make reads not take a performance hit from HKDF, we would have some kind of cache, probably unbounded in size, but then we would limit the growth of the cache by giving each entry some TTL, and probably we would make the key into the cache the etcd path of the object, which we already have, and that would prevent, like, frequent writes to the same object from polluting the cache too much.

A: And then, if you were just making lots and lots of new objects, then the TTL would prevent unbounded growth. That's kind of how we would keep the cache from getting too big, but the cache would basically just say: oh, I see that you have this nonce.
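And a sketch of what that read-path cache could look like, again with made-up names: entries are keyed by the object's etcd path, hold the most recently derived per-object key together with the KDF nonce it came from, and carry a TTL so growth tracks the write rate. A real version would also need a background sweep to reclaim expired entries; that is omitted here.

```go
// Sketch of a TTL cache of derived keys, keyed by etcd path.
package kmsv2sketch

import (
	"bytes"
	"crypto/cipher"
	"sync"
	"time"
)

type cacheEntry struct {
	kdfNonce []byte
	aead     cipher.AEAD
	expires  time.Time
}

type derivedKeyCache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]cacheEntry
}

func newDerivedKeyCache(ttl time.Duration) *derivedKeyCache {
	return &derivedKeyCache{ttl: ttl, m: map[string]cacheEntry{}}
}

// get returns the cached AEAD only if it is unexpired and was derived from
// the same KDF nonce stored alongside the ciphertext being read.
func (c *derivedKeyCache) get(path string, kdfNonce []byte) (cipher.AEAD, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[path]
	if !ok || time.Now().After(e.expires) || !bytes.Equal(e.kdfNonce, kdfNonce) {
		delete(c.m, path)
		return nil, false
	}
	return e.aead, true
}

// put overwrites any previous entry for the path, so repeated writes to one
// hot object keep at most a single entry in the cache.
func (c *derivedKeyCache) put(path string, kdfNonce []byte, aead cipher.AEAD) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[path] = cacheEntry{kdfNonce: kdfNonce, aead: aead, expires: time.Now().Add(c.ttl)}
}
```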
C: That's my confused face; I just can't... I don't have that in my head, so it's hard for me to reason about. I would probably lean really hard on folks like Mike and other folks who either can reason about, like, that chain of construction for the nonce, or have, like, crypto review teams they can ask about that. I just don't... I can't... I don't understand what each of those lengths and chaining those steps together... I don't know what the properties of that end up being.

H: Yeah, we can make sure that the...
C: Knowing what the performance overhead is was the only thing I really had an opinion on, and I think if we key the cache off of the etcd key, so we retain at most one decryption key per actual etcd entry, then that insulates us against cases where someone's doing a ton of writes of the same objects. And did you have the overhead? I think it'd be helpful to throw the overhead per cache entry in here, just so we can kind of reason about what the memory use would be for various amounts.

C: For the cache, I'm concerned about memory usage overhead, because, like, someone spamming updates to the same object will actually get compacted out of etcd, but unless we limit the cache to roughly match, like, the number of objects in etcd, you could do a lot of writes in, like, an hour. And so, like, thinking about what's our max QPS, what's a ballpark, like, max writes per second, how much overhead is there per key.
A: So it's not free by any means, and then you go from, like, 42,000 nanoseconds to 96,000 nanoseconds. So it's like twice as slow, but I mean, we're still talking 50,000 nanoseconds, which... I forget.

H: I feel like examining this in isolation is...

C: The performance on write... I am far less concerned about that than the performance on read. If you're going to do a global list and we're paying this overhead on every read of every object, that's way more concerning, which makes me think we need some sort of cache. And so then my main goal for the cache is to not let you own the servers, so...
H: The object as a prefix and then the data, we would not have that extra copy. The other thing that we do, which is not great for allocations, is, for every runtime.Unknown in protobuf encoding, whenever we serialize that, we do a full copy of the...

I: ...bytes, byte for byte, into the proto object, and then we're going to... I don't know what your implementation does, whether it does in-place encryption or not, but then that has to get copied again from ciphertext to plaintext. So we can reduce the number of those copies down to just reading it from etcd or whatever, and make back everything that we lose by doing this HKDF. But...
A: Okay, that's fine, I can take that, that makes sense. So let's say that the benchmarks are fine and the performance is acceptable, so we're kind of okay with this approach, barring our crypto friends telling us that we're doing crazy stuff, basically.

F: Ten minutes to my next meeting, basically.

A: How about... I see you on the call; did you want to talk about your thing?
D: Hi everyone, nice to meet you all. Sorry to interrupt the SIG-Auth meeting; it's just related to... I think you have a repo called sig-auth-tools. This repo, you know, has some GitHub Actions to run, to monitor the dashboard and, you know, other stuff like that, and we would like to, you know, make it more general.

D: So we wanted to reuse it. Instead of just duplicating or replicating the repo for SIG Scheduling, we would like to just see if it's okay to rename the repo and make it so everyone can just contribute one file, which is the GitHub Action, instead of, you know, owning and maintaining a separate repo for that, because I know we have a problem across the Kubernetes organization: we cannot create GitHub Actions in the kubernetes repos.
A: So that way we don't have to go find them ourselves, and when people make random issues in, like, the kubectl repo, we actually notice instead of ignoring it for months or years at a time. And the reason that that's a GitHub Action is, even though we use the latest and greatest GitHub features in our project board, you can only tell a GitHub project board to automatically add issues from one repo; you can't tell it "I want everything from an org".

A: Yeah, I have asked the GitHub Projects board PM about this before, by the way, so it's not that they're not aware, but that's what it exists to do, for context. Yeah.
D: And this is great, and actually I feel like, you know, this will benefit, like, everything, because, you know, having everything in just one dashboard, it's great. We had the same issue in the release team, and we had to go through, like, creating a new job, you know, with special scripts, to run it in Prow, and, you know, having it updated every release cycle. But I believe we don't need that here; it's easier.

D: You know, if we just have a GitHub Action scrape the whole org and just put the issues and PRs in the dashboard.

D: Not... yeah, not yet, but we have been told that we can contribute this back to ContribEx once we have it, like, generally okay.
C: Is the intent for this repo to evolve to be able to have more than one GitHub Action? Like, I think today, if I'm reading it correctly, it's like one action at the root, yep.

C: So would we want to... I'm trying to understand if the goal is to have, like, a repo for actions, and this would be, like, the first action, in which case it might make more sense to start a repo for actions and, like, copy the code in its current state into that repo, rather than trying to, like, rename it and move it around. I don't know, yeah.
D: This is my suggestion, actually: it's just to have a repo with multiple actions. For now we have this repo with just one action, and code that, you know, scripts against the organization. So yeah, it would be great, if this is a good idea, to generalize it, because we need it from time to time.

C: The nice... the second thought I had: the nice thing about this repo is we have exactly one consumer, and it's auth folks, and so the only people touching this are auth folks, and if we break something, then it only affects our project board, and, like, we're on the hook for noticing and fixing it. As soon as you have one action being used by a bunch of SIGs, now you have people contributing to that action and it impacting edge cases in other SIGs, and so you have a lot more requirements around, like...
C: That's the thing: like, it runs, and if it breaks, it screws up our project board, and we notice and we have to go fix it by hand. As soon as it's a thing that other people can contribute to, now I don't know if everyone who's contributing to a, you know, shared thing even knows it's being used against the auth board and is able or willing to go fix up the auth board by hand if it breaks, you know.

D: I guess your point... you have a totally valid point, but again, I'm just, you know, trying to reduce the number of repos that we are creating just for the same thing. It's like, I will just, you know, fork the repo, rename it, and have to maintain it by myself; it's all the same code, the same everything. We can add, like, "this is what, you know, this..."
D: This piece of code or this piece of script does one, two, three, and, you know, if you wanted to create a new thing, you can contribute a new one, and everyone can create their own set of GitHub Actions in a separate folder under the .github folder, and it's, like, maintained by the SIG leads or the SIG. It's...

C: A repo with a structure like that, with a folder per SIG, as, like, a natural home for code like that, makes sense to me. I'm not sure that renaming this repo is the right way to get that, or if creating a single repo like that and then having a folder per SIG and saying "you can drop actions in here"... And maybe if we had that, we would move this action into there and turn down this repo. I don't know.

C: Yeah, I would lean on ContribEx and Test-Infra folks. I know you said you had done this as a Prow plugin before and it was painful.
D: It's not, but it's a long process, and, you know, we can do that, but I feel like this is the main thing for GitHub Actions. It's like that, and I wanted to, you know, just revisit that again, to open a door for, you know, GitHub Actions, in a secure way, to just start to use them in the org, because it makes a lot of sense and it's easier, to be honest. But again, I've done this already in Prow.

A: The other sort of data point I wanted us to consider is, I think, at least for right now, the reason that we have this action is just because of the missing feature in GitHub project automation. Presumably they'll eventually get to that and have it, and then we won't need this specific action anymore. How about... do you have other things that you need to automate with actions that are, like, different from this one?
D: Right now, no. I just wanted to make sure... maybe in the future we can have, like, more automation, like, you know, "if you have this label, move it to, you know, needs review", or, you know, "needs approval", stuff like that. It's, like, you know, a more organized dashboard. But what we have today, this is more than perfect; this is all we need right now. But again, in the future, yes, we can, especially in the SIG.

D: You know, we have a lot of working groups and we might need it. I don't know, I don't have anything in mind right now.
A: Okay, yeah, I guess... I don't know, I don't have a super strong opinion; I'm kind of happy with whatever. I'm basically happy that I got an issue to rewrite this from a bash script into Go code, because to me bash makes my eyes bleed and Go makes me happy, so here we are. I've already succeeded in the goal I had, yeah.

D: Yeah, one more thing, you know... I love this, Anish. Okay, so I will go ahead and I will propose this to ContribEx, and hopefully it will be okay; otherwise, yeah, I will just create my own.