From YouTube: SIG-Auth Bi-Weekly Meeting for 20230830
A: Anish, do you want to talk about the secret feature split first, or the offline stuff first? Which one do we want to talk about? First.
A: Okay, so this doc is linked in the agenda, and I realize basically no one's had time to read it, because we were still in the process of putting it here. There are two things I want to get out of this conversation for now. One is high-level agreement that the problem is an okay one to solve, and then some of the specifics. So, at the high level:
A: This design is focused on adding a capability to Secrets Store CSI to support a mode where the cloud environment that backs the provider it's using is unavailable for some amount of time. Our use cases are around Edge deployments, where you might not have the best network connectivity, but you still want to use Secrets Store CSI.
A: In systems that are directly syncing into Kubernetes Secrets, this isn't really an issue, because they already wrote their data into the Kubernetes API, so it's just there and they can keep using it if they're offline.
A: So this proposal basically says that we will try to solve that problem somewhat generically in the driver, with coordination with the provider. That way, if you have arbitrary providers that want to leverage this feature, they can with relatively minor changes. It is explicitly off by default and requires opt-in by the admin, as well as opt-in by the provider implementation.
C: I'm trying to remember how much coasting there is already built into CSI. If a pod wants to mount a CSI volume, the CSI plugin gets consulted, obviously, before the pod starts the first time, and it has to fetch the data, set up the volume, populate it, and then it gets injected into the pod.
C: While the pod is running, the driver, I think, gets consulted periodically to see if it wants to update that data. If that fails, doesn't the data in the volume coast on what was previously set up?
C: Okay, so that already works. So for pods that are started already, we're not going to break anything. If a container in an existing, already-running pod dies and has to be restarted, that also coasts on the existing volume, right? We don't re-set-up volumes for container-level restarts. Okay, so this is only for: I need to start a new pod instance, I am actually setting up volumes fresh for a new pod, and the backing provider isn't available. Okay.
A: That would help. This is basically new pods: scale up, scale down, kubelet drains, upgrades, all that kind of stuff. And, you know, for our use cases we don't need indefinite offline support; we just need it for some amount of time. But I think you end up needing this, you end up needing a durable cache, basically, to sort of survive something like a cluster restart.
A: It is not trying to do something fancy on the file system of the kubelet it's running on, since that would inherently be problematic based on how scheduling happens. Okay, so, at a high level, does anyone have a concern about doing this at all? Again, I don't think such a feature should be enabled by default.
A: Nor do I think it should ever happen without the provider stating that they want this to happen. The reason being: we support workload-identity-style flows through Secrets Store CSI, where the service account token is being passed through, so it is being used as authentication of the pod and then authorization against cloud systems. So a provider needs to be aware that those checks are going to be skipped if they let this happen.
A: Or, more specifically, they could be skipped, depending on how we built it: if we built it in a way where the driver would decide on its own that it was offline and would just use its cache, then those checks would be skipped, right. The way this design has it now is that the calls are always made to the provider, and the driver only falls back if the provider returns a specific error that basically says "hey, I can't talk to my upstream."
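For illustration, the fallback flow being described might look roughly like the Go sketch below. The provider interface, the sentinel error, and the cache are placeholders invented for this sketch, not the actual secrets-store-csi-driver API.

```go
package offlinecache

import (
	"context"
	"errors"
	"fmt"
)

// ErrProviderOffline is a hypothetical sentinel the provider returns when it
// cannot reach its upstream secret store (e.g. the cloud API is unreachable).
var ErrProviderOffline = errors.New("provider offline: cannot reach upstream")

type MountRequest struct {
	PodNamespace   string
	ServiceAccount string
	SPCName        string // SecretProviderClass name
}

type Provider interface {
	// Mount fetches secret contents for the request, or returns
	// ErrProviderOffline if the upstream store is unreachable.
	Mount(ctx context.Context, req MountRequest) (map[string][]byte, error)
}

type Cache interface {
	Get(req MountRequest) (map[string][]byte, bool)
	Put(req MountRequest, contents map[string][]byte)
}

// mountWithFallback always calls the provider first; only the provider's
// explicit "offline" signal allows the cached copy to be used.
func mountWithFallback(ctx context.Context, p Provider, c Cache, req MountRequest) (map[string][]byte, error) {
	contents, err := p.Mount(ctx, req)
	if err == nil {
		c.Put(req, contents) // refresh the durable cache on success
		return contents, nil
	}
	if !errors.Is(err, ErrProviderOffline) {
		// Authorization failures and other errors are NOT masked by the cache.
		return nil, err
	}
	if cached, ok := c.Get(req); ok {
		return cached, nil
	}
	return nil, fmt.Errorf("provider offline and no cached data: %w", err)
}
```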
C: Okay, so if one workload asked for a secret and got approved and it got cached, and then the provider went offline, and another pod with another service account says "hey, please give me the secret": it wouldn't get a cache hit, because the cache is keyed by service account as well? Okay, that's right, yeah.
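Continuing the sketch above, keying the cache "by service account as well" could look like this; the exact fields are assumptions, not taken from the design doc.

```go
package offlinecache

// cacheKey illustrates keying cached provider responses by the requesting
// workload's identity, so a different service account asking for the same
// SecretProviderClass does not get a cache hit.
type cacheKey struct {
	PodNamespace       string // namespace the pod runs in
	ServiceAccountName string // identity the provider authorized
	SPCName            string // SecretProviderClass that was mounted
}

// key builds the lookup key for a mount request.
func key(req MountRequest) cacheKey {
	return cacheKey{
		PodNamespace:       req.PodNamespace,
		ServiceAccountName: req.ServiceAccount,
		SPCName:            req.SPCName,
	}
}
```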
A: And at the SPC level, the SecretProviderClass: you, the admin, when configuring your custom resource, say "I want offline for however long," and the provider has to go into a mode where it detects itself as being offline and then tells the driver that it can't do the mount.
A: But it's not because of an authorization issue or some other random failure; it's "I believe I'm offline, please go ahead and use the cache if you happen to have it." Basically the driver and the provider are always in the path and no one is getting skipped, so it's unlikely that something will happen in a way that is unexpected.
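For illustration only, the admin-facing opt-in could be an extra field on the SecretProviderClass spec. The field name and shape below are purely hypothetical, not an accepted API.

```go
package offlinecache

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// SecretProviderClassSpec sketches only the parts relevant to this discussion;
// the offlineCache field is a hypothetical addition, not part of the real CRD.
type SecretProviderClassSpec struct {
	Provider   string            `json:"provider"`
	Parameters map[string]string `json:"parameters,omitempty"`

	// OfflineCache, if set, opts this SPC into cached mounts when the
	// provider reports itself offline. Off by default.
	OfflineCache *OfflineCacheConfig `json:"offlineCache,omitempty"`
}

type OfflineCacheConfig struct {
	// MaxAge bounds how long a cached copy may be served ("offline for
	// however long"); older entries are treated as a cache miss.
	MaxAge metav1.Duration `json:"maxAge"`
}
```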
A: With that in mind, okay, so I'm not hearing any immediate pushback or "don't do the thing." The hard part about all of this is having durable storage for an offline thing, because the point of Secrets Store CSI is to not use Kubernetes Secrets.
A: So basically we have three proposals for where the durable thing lives. One is Kubernetes Secrets in the namespace of the Secrets Store CSI driver, so at least they're local to it.
A
The
other
is
using
a
custom
resource
definition
as
the
cache,
also
in
the
namespace
of
the
secret
CSI
driver
and
using
it
there
and
then
the
third
is
mostly
just
a
combination
of
the
two
with
the
idea
that
you
have
a
custom
resource
that
holds
all
the
metadata
for
the
cache,
but
then
pointers
to
kubernetes
secrets
to
hold
the
actual
data
which
gets
more
complicated.
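A rough sketch of what that third option could look like: a cache custom resource holding metadata plus references to Secrets that hold the payload. The type and field names are made up for illustration.

```go
package offlinecache

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// CachedMount is a hypothetical custom resource, living in the driver's
// namespace, that records what was cached and when, while the secret
// material itself stays in referenced Kubernetes Secrets.
type CachedMount struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec CachedMountSpec `json:"spec"`
}

type CachedMountSpec struct {
	PodNamespace        string      `json:"podNamespace"`
	ServiceAccountName  string      `json:"serviceAccountName"`
	SecretProviderClass string      `json:"secretProviderClass"`
	CachedAt            metav1.Time `json:"cachedAt"`

	// DataSecretRefs point at Secrets (in the driver's namespace) holding
	// the encrypted payload for each cached object.
	DataSecretRefs []string `json:"dataSecretRefs"`
}
```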
A: Obviously, the gist of the overall concern I have is that they might audit-log the bodies or do something terrible. Yeah, so I don't... I'm stuck, right. I want basically neither, for various reasons.
A: Fixing all the problems by inheriting a bunch of complexity. Where do we put that key? That would be in a Kubernetes Secret, right? So it's okay if an Ingress controller can read it, because it doesn't have read access to the cache, right, so it doesn't have the complete access to actually get all the data.
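A minimal sketch of the encryption split being alluded to: the cached payload is sealed with a key kept in a separate Kubernetes Secret, so reading only the key, or only the cache, is not enough to recover the data. Standard library AES-GCM, purely illustrative.

```go
package offlinecache

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts a cached payload with a 32-byte key (which the design would
// keep in a Kubernetes Secret separate from the cache object itself).
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so open() can recover it.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}
```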
A: As in the secret sync feature in Secrets Store CSI? Yeah, well, I mean, we plan on removing that feature. Okay.
C: Is there a thing we could do with the key that would make it less accessible and possibly more ephemeral? And if we lose the key, worst case we lose our cache. Unlike normally with encryption keys, where you really, really, really have to never, ever lose the key, because if you lose it you're dead.
E: You could have your own, like, oracle service in the cluster that uses TPMs, and if it, you know, gets all its resources at once, you just...
C: ...or whatever mechanism implementations have, any communication mechanism they have where they could hold this key, so it wouldn't be stuck in a Kubernetes Secret. I'm just... I don't know, I'm wondering.
F: Probably. Like, given that there's no... Jordan and I were talking the other week about seeing if we could finally generalize node restriction to more things, but yeah. Typically, what I see in these cases is that the service account a DaemonSet runs as is just given the permissions cluster-wide, and nothing would stop nodes... well.
A: So that is fair. It does weaken the boundary, sort of inherently, with what it's asking the system to do, right. It's asking these DaemonSets to basically coordinate in a central place, and they inherently trust the central place. So if you can poison or otherwise mess with the central place, you cause all of the DaemonSets to misbehave.
F: You know, a persistent volume claim for an attached disk that holds your cache, which can move around between nodes if it needs to survive node restarts, and just be super careful about isolating the whole thing, but having an in-cluster cache. Which is way more work, but you maybe get better security boundaries out of it and avoid Secrets.
F: Outside the API, so that... I don't know. I don't know if that's better or not.
E: I don't know how much to worry about it, right, because it's like: say I'm using GCP Secret Manager, or whatever it's called, and I keep my, you know, banking access token in there. By the time I've enabled the feature that lets that token leave GCP Secret Manager and be stored anywhere except memory, I've kind of given up on...
A: These are secrets that are exported out of the secret store; they're not like asymmetric keys where the private key never leaves the vault.
F: I mean, maybe a question for others: if we did use Secrets to store the cache, is there a path to extend the node authorizer for that, which would at least give us the option to run the drivers the way the kubelet runs and get some protection?
C: That action is exactly what you would not want a compromised node to be able to do, right? Even if we did find some way to, like, link a random secret and...
F: It wouldn't even... just linking the secret to the pod wouldn't be enough, right? Because if you're trying to survive node restarts, you need to link it to a Deployment or a DaemonSet or something higher level, because the pod's not going to survive a node restart either; you're just going to get a new one.
C: Doesn't that re-implement just the relevant bit? Yeah. Being able to cross nodes is a goal, because it helps with node drains and node operations, and it's also an anti-goal, because it breaks node isolation. Like, I don't see... maybe.
F: Yeah, yeah, I don't know. How is that gated? I'm, like...
A: The driver running on the kubelet has no special permissions. When a pod runs and asks for the CSI mount, it is issued a service account token for the pod under a specific audience. The driver then passes that through to the provider, and the provider then passes it to the cloud provider to do whatever checks using it. So, so...
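For context, the token hand-off described here relies on the kubelet's CSI token-request support: when the CSIDriver object lists tokenRequests, the kubelet injects the pod's tokens into the volume context under the csi.storage.k8s.io/serviceAccount.tokens attribute. Below is a simplified sketch of pulling one back out; the surrounding function and error handling are illustrative, and the JSON layout is assumed from that documented attribute.

```go
package offlinecache

import (
	"encoding/json"
	"fmt"
)

// serviceAccountTokenAttr is the volume-context key the kubelet uses when the
// CSIDriver object requests pod service account tokens.
const serviceAccountTokenAttr = "csi.storage.k8s.io/serviceAccount.tokens"

// tokenEntry mirrors the per-audience value the kubelet serializes.
type tokenEntry struct {
	Token               string `json:"token"`
	ExpirationTimestamp string `json:"expirationTimestamp"`
}

// podTokenForAudience extracts the pod's token for one audience from the
// volume context passed to NodePublishVolume, so the driver can forward it
// to the provider unchanged.
func podTokenForAudience(volumeContext map[string]string, audience string) (string, error) {
	raw, ok := volumeContext[serviceAccountTokenAttr]
	if !ok {
		return "", fmt.Errorf("no service account tokens in volume context")
	}
	tokens := map[string]tokenEntry{}
	if err := json.Unmarshal([]byte(raw), &tokens); err != nil {
		return "", err
	}
	entry, ok := tokens[audience]
	if !ok {
		return "", fmt.Errorf("no token for audience %q", audience)
	}
	return entry.Token, nil
}
```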
F: You still need some way to indicate that relationship. But if it's the combination of this thing populated in parallel and some additional node-authorizer rule specific to this thing that also runs on the control plane...
A: Right, yeah. Well, I mean, the problem with secret sync is that it requires the driver, or whatever component it is, to be able to create Secrets, right? So once you can create Secrets, you can just be like: I would like a Secret, and I would like the token controller to fill it in for me with the nice data from this service account, and now I'm cluster admin, because I could create a Secret in this namespace. Yeah.
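A minimal client-go sketch of the escalation being described, relying on the legacy token controller behavior that auto-populates kubernetes.io/service-account-token Secrets. The namespace and names are made up for illustration.

```go
package offlinecache

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTokenSecret shows why "can create Secrets in a namespace" is so
// powerful: the legacy token controller fills this Secret with a long-lived
// token for the named service account, which may be highly privileged.
func createTokenSecret(ctx context.Context, client kubernetes.Interface) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "stolen-token", // illustrative name
			Namespace: "kube-system",  // any namespace the caller can write to
			Annotations: map[string]string{
				corev1.ServiceAccountNameKey: "powerful-controller", // target SA
			},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}
	_, err := client.CoreV1().Secrets(secret.Namespace).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
```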
A: Well then, the other thing on the agenda is splitting up Secrets Store CSI into basically, effectively, two projects, right: one that does Kubernetes Secret syncing and one that does the CSI mount, which I can initially talk about more. Oh...
A: If it was just a single node, it would be a non-issue, right; we would just be like, yeah, just use whatever the TPM has. And you don't have... like, I know various projects at Microsoft that can do the single-machine case really well across basically any Linux distribution with some relative compatibility, but I just don't know if anything exists for anything beyond more than one Linux VM or machine.
C: This doc talks about protecting the content in the API with encryption, and there's some weirdness around where to put the encryption key, but it doesn't really talk about node isolation at all. And today the provider can account for that, because they're given the token and they can see what node it came from. Well, they can see the pod.
C: They can figure out what node it came from if they cared; hopefully soon they'll get the node information as well. But if we flatten the cache, in the current version of the design, we lose that information.
C: Which would help in some cases. So, like, if you had a DaemonSet or something, it would help that, right? You could still do rollouts and things, because you would have a copy, and you could still key it by the node. So it would help things like DaemonSets, but it wouldn't help node drains and cross-node scheduling moves and stuff.
A: Cache however you want to cache, right; nothing says that the provider has to call a cloud API. The ugliness of that is that, if more than one provider cares about such a feature, you've just asked everyone to do the same work over and over, when the driver could have just done it for them. Yeah.
F: Often volumes are bound to specific nodes, so you wouldn't... I mean, you could do...
A: Right, yeah, I'm not really asking about that aspect. I'm more asking: if you built the capability of that approach, is that approach better than using the Kubernetes API with a custom resource plus a Kubernetes Secret and encryption and all this stuff to try to kind of hold it together? Because the beauty of using a custom resource and a Kubernetes Secret is that you don't have to install anything on the cluster; it's all just there, you're just...
F: Awesome. Yes, I think there are trade-offs. Your service has to handle encryption for you, whereas, you know, kube-apiserver has Secrets encryption already.
F: Once it gets posted to YouTube, you'll get a transcript, usually, off of YouTube.
B: Yeah, so this is a controversial topic. I think I did bring this up with the SIG, maybe last year. The current situation is that in the CSI driver, even though it is a CSI driver, we support syncing as Kubernetes Secrets, which we realize we shouldn't have, but it was mostly because a lot of users use the CSI driver to get secrets from an external secret store so they can use them with Ingress resources. And we've had conversations in our team, in our upstream meeting, basically the CSI community meeting, trying to come up with a way, and also trying to decide if we want to split the project and make it its own thing. And one of the other considerations was that there are actually a couple of other projects out there in the wild which do the exact same thing, right.
B: So the conclusion we came up with, based on our community calls, was that we want to split it and have this as a separate project, because we have a lot of users today in the CSI driver using that feature who can reuse the existing custom resources and easily move to the new project. That is our biggest motivation: all our current users can still continue using it, rather than having to ramp up, go learn a different project, and then migrate all their applications to use that. I did a POC and demoed it in the community call. I'm trying to consolidate that into the Google doc that I had shared before, so I'll present that, and also redo the POC demo, in the next one. But I think the general questions are: if we split the project, is it okay that we have those two projects blessed by SIG-Auth?
B: In the driver, we basically get the service account tokens from the kubelet, so the driver gets them and passes them on to the provider and all of that, right. But if we do this sync controller as a separate thing, basically this controller is going to have god-level permissions, to be able to generate tokens on behalf of the workloads in the custom resources that are requesting these secrets.
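Concretely, "generate tokens on behalf of the workloads" means calling the TokenRequest API for arbitrary service accounts. A minimal client-go sketch; the namespace, service account, and audience are placeholders.

```go
package offlinecache

import (
	"context"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestWorkloadToken asks the API server for a short-lived token for the
// given service account, scoped to an explicit audience (e.g. the external
// secret store). This is the capability the sync controller would need.
func requestWorkloadToken(ctx context.Context, client kubernetes.Interface, namespace, serviceAccount, audience string) (string, error) {
	tr := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{audience}, // never the cluster's default audience
		},
	}
	resp, err := client.CoreV1().ServiceAccounts(namespace).CreateToken(ctx, serviceAccount, tr, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	return resp.Status.Token, nil
}
```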
B: Also, we are planning to tie the custom resource to a service account so that we can still support workload identity within it, but yeah, this single controller is going to have god-level permissions. And then the third thing is that today the CSI driver is generic enough that the driver exists as a SIG sub-project, but providers are implemented by each provider, so they exist in different repos: the Azure one exists in the Azure repo, the Google one exists in the Google repo, and all of that.
B: All those providers have to be packaged as sidecars in a single pod so that we can still continue using gRPC, so we can still use a Unix domain socket to communicate with the different providers. So these are the three lingering questions. I will add this to the doc and present it with the demo next time, but I'm looking for thoughts and concerns that we should be aware of.
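The sidecar packaging exists so the controller can keep talking to providers over the existing gRPC-over-Unix-domain-socket interface. A minimal dial sketch with grpc-go; the socket path and credential choice are illustrative.

```go
package offlinecache

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialProvider connects to a provider sidecar listening on a Unix domain
// socket shared via a volume in the same pod.
func dialProvider(socketPath string) (*grpc.ClientConn, error) {
	// The "unix://" scheme tells gRPC's resolver to dial a UDS endpoint;
	// no TLS is used because the socket never leaves the pod.
	return grpc.Dial("unix://"+socketPath, grpc.WithTransportCredentials(insecure.NewCredentials()))
}
```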
A: The one thing I was curious about, and maybe Jordan, you know better than me because I haven't looked in a while: the validating admission policy CEL that has access to the authorizer, can it do any checks here?
A: I mean, we could ship a default policy that doesn't allow kube-system and other namespaces to be messed with, and then maybe also explicitly... or we could invert it all and only allow well-known, explicitly opted-in namespaces to be used. It is problematic, though, right: you basically have to grant the authorization cluster-wide and then constrain it at admission somehow, which is my least favorite part about admission and authorization when used together.
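As a purely hypothetical illustration of the "opted-in namespaces" idea, a ValidatingAdmissionPolicy expression could gate the sync controller's Secret writes on a namespace label. The service account name and label key below are invented for this sketch and are not part of the proposal.

```go
package offlinecache

// optedInNamespacesCEL is a CEL expression a ValidatingAdmissionPolicy matched
// to CREATE/UPDATE of Secrets could use: requests from anyone other than the
// sync controller pass through, and the controller may only write Secrets in
// namespaces that carry an explicit opt-in label.
const optedInNamespacesCEL = `
request.userInfo.username != 'system:serviceaccount:secrets-store:sync-controller' ||
(has(namespaceObject.metadata.labels) &&
 'secrets-store.x-k8s.io/allow-sync' in namespaceObject.metadata.labels &&
 namespaceObject.metadata.labels['secrets-store.x-k8s.io/allow-sync'] == 'true')
`
```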
A: I think the concern I have, more, is that I'm okay with this controller having complete access to your Vault, because you've given it that and it can do whatever bad things it wants with the secrets, but that should not also give it cluster admin on the environment. Like, it shouldn't need that level of access.
A: Yeah, so the Vault provider does care, because they're built with workload identity integration, right; the provider cares very much that you pass in the specific service account. Like, I mean, one thing I know we could do is disallow token requests made by this controller that don't have an explicit audience set, because it has no business asking for a token with the cluster's audience. So we can certainly constrain it immediately in those ways, and we could constrain, I think, the types of Secrets it could make.
D: Look, we can think about all these ways we can enhance it, but I think the larger question was more: if we split out this controller, is this something that, say, SIG-Auth is interested in sponsoring? And there's pros and cons with this, right.
D: I would also argue that, with this centralized controller, the goal is to work with the external-secrets operators, so that they also get the extra benefits of all the extra security we're enhancing around it, hopefully, because that's a project that a lot of people use today and they have the same issues.
B: ...them, yeah. The other one was: when we moved the CSI driver, one of the goals was to make providers out-of-tree, right, so that we don't package all of that in a single repo. So with this one, the code still exists in different repos, like it's not in kubernetes-sigs, but we would still need to package the pod manifest with all the different providers, because the communication is still through a Unix domain socket. So...
C: Yeah, I mean, putting them side by side in a pod so they can still speak over RPC seems fine, and I like that we don't require the provider to change anything. We can still use the same provider interface and make the same requests to it, so that one provider implementation can be used with the CSI driver or for this. That seems good.
A: What was the third thing? So that's the one: the packaging constraints. One of the goals of Secrets Store CSI was to not be a kingmaker, right, and I think it succeeded in that by splitting out providers, and I think this still continues that trend. Then there's the permissions thing. What's the third thing?
C: What would we do for people who are currently using it? Like, would there be a migration, or an "if you were doing this now, do this" type of thing? Yeah.
B: So, as part of the POC, we are able to keep the same custom resource and the whole provider model; everything is the same, right. The only thing is you deploy this extra additional controller, and then there is a wrapper custom resource which references the old SecretProviderClass and then has details about the service account and all of that, which the controller can use for workload identity. That's the sample I shared in the Zoom chat.
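A rough Go sketch of the shape such a wrapper resource might take, reconstructed only from the description above; the kind and field names are guesses, not the actual POC API.

```go
package offlinecache

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// SecretSync is a hypothetical wrapper resource for the standalone sync
// controller: it points at an existing SecretProviderClass and adds the
// identity details the controller needs for workload identity.
type SecretSync struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec SecretSyncSpec `json:"spec"`
}

type SecretSyncSpec struct {
	// SecretProviderClassName references the existing, unchanged SPC.
	SecretProviderClassName string `json:"secretProviderClassName"`

	// ServiceAccountName is the identity the controller requests tokens
	// for when talking to the external secret store.
	ServiceAccountName string `json:"serviceAccountName"`

	// SecretObjectName is the Kubernetes Secret the controller writes.
	SecretObjectName string `json:"secretObjectName"`
}
```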
B: In terms of migration, once we have this separate controller, we will define a plan for deprecating this in the CSI driver project, which will probably be a 2.0 or something because it's a breaking change, and we will also document how users can move to it. They can keep most of the resources that they have today; it's just additional things to deploy and all of that.
C: You could skip your webhook, or... so you could, but that means you're putting an admission webhook in front of token requests and then saying "only intercept things from this one service account." Like, I don't know: on any cluster older than 1.28, it means all token requests are going to go to your webhook, and that can toast your cluster if it's unavailable.
A: Okay, but I thought... is validating CEL admission not beta yet? I thought it was beta. Oh.
A: Right, cool. Thank you for the discussion, everybody. I think we'll probably have it again in a couple of weeks. I know you're so excited, but, okay.