From YouTube: Pinniped Community Meeting - February 4, 2021
Description
Pinniped Community Meeting - February 4, 2021
This meeting dove deep into API design and the upcoming release of v0.5.0, featuring multiple pinnipeds.
Notes and meeting information can be found here: https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?view
A: All right, welcome to the first community meeting of February 2021. For those that are joining from outside the community (even though right now we don't have anybody), I just want to make the statement: abide by our code of conduct. Read and abide by it while you are in this meeting; it is listed in the notes.
A: So if you need to find the code of conduct, it is in the agenda notes at the top. I will just kind of go through those who have input anything, and then, if you have anything to add at the end after the status updates are given, feel free to chime in. But I will go with the first one: Margo.
B: Yeah, so I've been working on the impersonation proxy track, trying to make it possible for clusters that are cloud hosted to be able to use Pinniped. And I think Matt's also probably going to talk about some of the other stuff, but one of those pieces was detecting cloud-hosted environments. So if we can tell that you're running on GKE or EKS, then we can use that information to spin up an impersonation proxy for you, and if you're running on kind, then we don't need to.
C: Cool, I can go next; a few items here. One kind of minor thing I did this week was hooking up codecov.io to our repo, so it tracks our coverage data and leaves comments on each pull request telling you whether you made coverage better or worse. I can see how this could be annoying, so I wanted…
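As an aside, codecov's PR comments can be tuned down if they get noisy; a minimal codecov.yml along these lines (an assumption about the setup, not necessarily what the repo committed) keeps the coverage check informational and only comments when coverage actually changes:

```yaml
coverage:
  status:
    project:
      default:
        informational: true   # report coverage changes without failing the check
comment:
  require_changes: true       # only leave a PR comment when coverage moves
```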
E: So I will admit I have not worked with many projects that have these tools, but with the ones I have worked with that have them, I completely just tune them out. It's just noise, because it's not usually actionable for me; it doesn't encourage me to write more tests.
C: Anyway, we have options. Next status item: I worked on a design for some of the API changes related to the impersonation proxy feature. Margo and I worked together on some of that earlier in the week, and I think we…
C: This week I wrote a blog post. I think a lot of you already took a look at it, but give it a read; we'll plan to post this today, I think, and then cut v0.5.0. And I guess, yeah, we're cutting v0.5.0 today, so that's good.
C: Okay, so this blog post, I think, doesn't quite fit the pattern that we might have in other releases, where we write a blog post announcing the release and talking about all the cool new features that we shipped in it. Because the feature that we're shipping in this release is pretty small, and it's only interesting in some niche cases; most Pinniped users won't care about this feature at all.
C: So what I thought was interesting to blog about was not the feature itself but kind of how we built it in the API machinery, and that it might be interesting to other developers who write Kubernetes controller apps. So that's what I focused on, and we'll see. I expect…
E: I was on the v0.5.0 release. I saw yesterday that Ryan and Margo had opened their PR, so I was trying to figure out: do we feel current main…
D: Yep, thanks, I'm glad to be back; it's day three, I guess. So mostly catching up with the team and trying to get a sense of what's going on with in-flight work, as well as how the team has been thinking about upcoming initiatives. I got some of that information, and I scheduled some meetings with the team over the next, I guess, two weeks that will help us make some decisions that will inform a short-term roadmap; when I say short-term, I'm thinking roughly six iterations, and then those will also be foundational for a longer-term roadmap. Also, I'll work a little bit more closely with Dan to understand some of the specific outcomes we believe we're trying to drive, specifically with Pinniped as it exists in kind of a broader umbrella of stuff. I know that's rather ambiguous.

D: I hope that next week I can speak a little bit more clearly on what that actually means, but yeah, happy to be back.
C: Oh, we skipped over Ryan and Andrew and Mo. If you have anything you want to add, yeah.
A: No comments? All right, discussion topics: review API design proposal.
C: Yeah, okay, so I posted this earlier in the week, so maybe folks had a chance to read it. This is a proposal about the API changes that go along with the new impersonation proxy feature. I don't think it necessarily changes that much about how the code actually works to implement the impersonation proxy; it's mostly about how you configure and describe the status of the impersonation proxy.
C: So this is a departure from what we have filed as sort of stories right now in GitHub. The goals were to make the Concierge work with no configuration in as many cases as possible. I think that's been our goal since we started: most cases should work with no config; you just install Pinniped and it works correctly.
C: I also wanted to leave room in the API so that we can eventually configure everything that might ever need to be configured about how it behaves, even if right now we don't have all those knobs yet. And then I also really wanted to get rid of the CredentialIssuer resource, and I don't know if I justified very well why I want to get rid of it. But it's mostly because I don't like having a singleton API; it's kind of a weird API pattern.
C: I think it's not obvious why most people would care about that resource anyway. Actually, none of our CLIs, for example, use it anymore at all; our CLI used to read it, but it doesn't anymore.
C: So, the changes I broke down here. The first one (you can scroll down a little) is basically the change that Margo talked about, which is to start adding some auto-detection of when the impersonation proxy is needed. We can make this fancier later on, but the simplest place to start is just: are we running on one of the three major hosted cloud provider cluster types?

C: I hope I'm not wrong, but they're all relatively easy to detect because they all use labels on the node objects, so you can just make a little API call and tell if you're on one of those. So in the next version of Pinniped, after the one we shipped today, we would detect that we were in one of those environments. If we are in one of those environments, generate and persist a CA, which we have code to do.
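As a rough sketch of what that detection might look like (the label keys here are assumptions about what each provider sets on its Node objects; a real controller would list Nodes via client-go rather than take a pre-built label map):

```go
package main

import "fmt"

// detectProvider guesses the hosted cloud provider from labels found on a
// cluster's Node objects. The label keys below are illustrative assumptions
// based on the discussion, not Pinniped's actual detection logic.
func detectProvider(nodeLabels map[string]string) string {
	switch {
	case hasKey(nodeLabels, "cloud.google.com/gke-nodepool"):
		return "gke"
	case hasKey(nodeLabels, "eks.amazonaws.com/nodegroup"):
		return "eks"
	case hasKey(nodeLabels, "kubernetes.azure.com/cluster"):
		return "aks"
	default:
		return "" // unknown: e.g. kind, where no impersonation proxy is needed
	}
}

func hasKey(m map[string]string, k string) bool {
	_, ok := m[k]
	return ok
}

func main() {
	gke := map[string]string{"cloud.google.com/gke-nodepool": "default-pool"}
	fmt.Println(detectProvider(gke)) // gke
	fmt.Println(detectProvider(map[string]string{"kubernetes.io/os": "linux"}))
}
```

On an unrecognized cluster the function returns the empty string, which would map to the "auto mode turned itself off" behavior described later.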
C: I think, if you could go back up for just a second: there's a bunch of cloud-provider-specific annotations that could be useful in certain scenarios, but I don't think we need any of them to make the feature work in this version. It would just be a LoadBalancer service. Okay, and then the next thing.
C
Second
change
is:
take
the
status
information
that
currently,
we
kind
of
keep
in
the
credential
issuer
api
and
start
putting
it
in
these
in
a
new
status
field
of
the
authenticator
objects.
Instead,
so
we
already
have
like
some
common
types
between
the
jot
authenticator
and
the
token
web
hook
authenticator.
C: The slightly awkward thing about this is that it is basically the same information on each authenticator object. So if you had a hundred authenticator objects, that would be pretty wasteful, but in most scenarios you have one, and even in more complex scenarios you probably have two or three or five, so that doesn't feel that bad to me. I have an example of what I think the API should look like here.
E: If I could make a comment: yeah, please, copying the same information a thousand times is a Kubernetes pattern. Note the root CA publisher: there is no global ConfigMap that has this; the singleton object is copied via controller into a known location in every single namespace. So totally, totally fine; this is well within exactly what Kube would tell you to do.
C
Yeah
so
in
this
object
the
the
spec
and
the
metadata
and
everything-
that's
that's!
That's
what
our
api
looks
like
today.
Already
the
new
part
in
the
status
field.
Is
this
concierge
strategies
field
which
tells
you
for
each
type
of
of
strategy
that
we
support,
and
that's
that's
the
name
that
we
use
right
now,
I'm
open
to
changing
that
name.
I
think
strategy
is
a
little
bit
generic
word.
C: We have basically the existing strategy that the Concierge uses today, which is the token credential request API, and we can report on whether that's successful or not, and, if it's successful, its API endpoint and the CA you use to talk to it; that's what you need to then build a kubeconfig that targets that strategy. And then the new section is the impersonation proxy, which turns out to be almost all the same fields.
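A hedged sketch of what such a status might look like on a JWTAuthenticator (the field names under `status` are illustrative guesses from this discussion, not Pinniped's finalized API):

```yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: my-jwt-authenticator
spec:
  issuer: https://my-issuer.example.com
  audience: my-audience
status:
  strategies:
    tokenCredentialRequestAPI:
      status: Success
      server: https://10.0.0.1:443
      certificateAuthorityData: "<base64 CA bundle>"
    impersonationProxy:
      status: Success
      endpoint: https://impersonation-proxy.example.com
      certificateAuthorityData: "<base64 CA bundle>"
```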
C
But
again,
that's
that's
the
information
that
you
need
to
connect
to
the
impersonation
proxy
in
a
coup
config
and
then
for
backwards
compatibility.
We
can
take
the
status
information
and
we
can.
We
can
continue
to
maintain
the
credential
issuer
for
another
release
or
two
just
in
case
anybody's,
depending
on
it.
We
can
mark
it
deprecated
in
o60.
C
And
then,
finally,
this
is
basically
the
space
that
I
want
to
add
the
space
that
I
want
to
reserve
for
future
configuration
is
then
in
the
config
map
configuration
of
the
of
the
concierge.
So
I
think,
almost
all
of
the
things
that
you
would
want
to
configure
about
how
the
concierge
works
are
things
that
you
would
know
when
you
installed
it.
So
I
like
that,
for
example,
on
the
supervisor
I
like
that
you
can
come
back
and
dynamically
reconfigure
it
with
a
new
idp.
I
think
that's
really
powerful
and
useful.
C
I
don't
think
it's
necessarily
useful
to
dynamically
reconfigure,
the
concierge
to
operate
in
some
different
mode,
because
it's
it's
a
component
that
you
would
probably
deploy
like
as
a
cluster
add-on,
and
you
would
kind
of
know
when
you
installed
it
kind
of
the
parameters
that
you
wanted
to
use.
So
the
the
initial
in
in
060
this
section
would
just
be
empty.
C
We
wouldn't
actually
have
any
new
config
some
point
after
o60,
we
would
add
some
config
and
some
things
we
could
configure
are
this
mode,
which
would
be
either
forcing
the
impersonation
proxy
to
be
on
or
forcing
it
to
be
off
or
the
default
could
be
the
o60
behavior,
which
is
detect
cloud
providers.
Basically,
I
think
it
detects
cloud
providers
or
something-
maybe
maybe
I
think,
maybe
a
fancier
heuristic
than
just
detecting
cloud
providers.
Maybe
we
can
find
another
another
heuristic
that
gets
us
to
like
be
correct.
In
more
percentage
of
cases,.
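A hypothetical sketch of what that future knob could look like in the Concierge's static ConfigMap (none of these keys exist yet; they are purely illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pinniped-concierge-config
data:
  pinniped.yaml: |
    impersonationProxy:
      # auto | enabled | disabled; "auto" would be the default
      # v0.6.0 behavior of detecting cloud provider clusters.
      mode: auto
```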
E: Okay, I was going to ask: if we are going down the route of having, in the future, a more elaborate config in our ConfigMap, with more and more stuff in there, it might be a good idea for us to start versioning that config, just like we do all of our…
C: It's just like the way that kind uses versioned configs for static files. So yeah, and I mentioned too, this section of the config I'm talking about would also be a place where, if we want to add configuration for the kube-cert-agent behavior, we could add that in. But again, this is…
C
Future
future
api
work,
the
example
here
I
gave
oh
something
else
important
in
the
config.
If
we
scroll
back
up
for
a
second
something
that
I
think
we'll
want
to
be
able
to
do
in
the
future
is
disable
the
automatic
creation
of
that
load,
balancer
service
and
let
the
user
configure
that
out
of
band
so
say
like
maybe
load
balancing,
doesn't
work
in
my
cluster,
but
I
have
a
host
port.
I
forwarded
correctly,
and
I
know
the
external
name
of
that
host.
C: Okay, so the first example: this would be on a cluster where the token credential request API works, like on a kind cluster. So you can see that it has a success status, and then the impersonation proxy would be disabled, because it would be in this auto mode where it detected that it wasn't on a cloud provider cluster, and so it just turned itself off.
C: …would start up, and it would have this connection information. And then, yeah, one alternative API idea is just a structural thing about how that strategy status information is laid out. I think this is a style question; I don't really know what the right answer is. In the main examples, each of the strategies has its own status type.
C
I
mean
this
example
like
they
share
a
common
strategy
status
object.
That
then
has
like
some
specific
information
under
it.
I
don't
know
what
the
right
answer
is.
Okay,
sorry,
that
was
a
long
read
through.
I
wanted
to
make
sure
there's
some
comments
that
I
wanted
to
get
to
that
are
further
up
in
the
dock
nancy.
If
you
want
I'm
happy
to
share
too,
if
you
want
to
either
way.
C: Oh, I think I kind of addressed this in the example at the bottom; we can talk more about that. The implementation, I think: essentially we just need some in-memory cache of what the status of each of the strategies is, and then in our authenticator controller we can have a controller that copies that into the status of each authenticator.
C
I'm
not
I'm
not
sure
exactly
what
the
implementation
would
look
like,
but
I
try
to
mostly
focus
on
the
api
design
in
this
dock,
but
it
seems
feasible
to
me
that
you
would
have
some
piece
of
code
that
when
the
when
the
status
of
one
of
the
strategies
changes
it
broadcasts
that
out
and
applies
it
to
all
the
current
authenticators
and
then
maybe
in
the
authenticator
controller.
When
a
new
authenticator
is
created,
it
needs
to
go
pull
that
in
I,
it
might
be
a
little
bit
complicated
like.
C
I
think
there
might
be
two
paths
that
actually
set
that
status
or
we'd
have
to
like
make
a
informer,
basically
make
a
fake
informer
that
that
feeds
the
existing
controller
code.
I
don't
know
mo
mo.
You
may
have.
C: We could start with just a really basic config that just forces it to be on, and that would be enough for testing. I think we could even do that with an environment variable that we don't really expose as an API at all yet. Because, even though LoadBalancer services don't work in kind, the cluster IP part of the LoadBalancer service does work; you can actually talk to it through squid. So I think we could…
F: They all run immediately, early in the pipeline, and it would also be nice to do what you said as well: to add a real GKE cluster or a real AKS cluster, to make sure it really works there as well.
C: Nothing in kind would ever set that external name field; that's how you would kind of know that it wasn't working. So we might have to, in our test, reach out and pretend that it got an external name and just set that field.
F: So it comes down to how good we think we can do auto-detection.
C
Those
are
the
main
main
questions
and
then
I
think
the
last
comment
was
about
basically
is
oh.
First
of
all,
I
guess
I
want
to
ask:
is
this
ryan
what
you
had
in
mind
as
like
another
type
of
api
structure,.
C: Yeah, this is an array where each entry of the array describes the status of a particular strategy, and all of these fields are of one type; here and here you're looking at the same type. And then, because each strategy has some strategy-specific configuration status, there has to be this extra little tagged union field inside there that has the specific data: if the impersonation proxy is working, here's how to connect to it, and there would be a similar little tagged union field for the…
C: …token credential request API. That's different from here, where this is not an array; it's just an object with two fields, and each of these fields is a totally different type. So actually this status field here and this status field here are two different Go types.
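To make the array shape concrete, here is a minimal Go sketch of shared status entries with a small tagged union for the strategy-specific data (all type and field names are illustrative, not Pinniped's actual types):

```go
package main

import "fmt"

// StrategyStatus is the shared per-strategy status type used by every
// entry of the array.
type StrategyStatus struct {
	Type     string    // e.g. "TokenCredentialRequestAPI" or "ImpersonationProxy"
	Status   string    // "Success", "Disabled", "Error", ...
	Reason   string
	Frontend *Frontend // set only when the strategy is usable
}

// Frontend is the tagged union: exactly one pointer is non-nil,
// matching the entry's Type.
type Frontend struct {
	TokenCredentialRequestAPI *EndpointInfo
	ImpersonationProxy        *EndpointInfo
}

// EndpointInfo is the connection info a client needs for a strategy.
type EndpointInfo struct {
	Server                   string
	CertificateAuthorityData string
}

// firstUsable returns the first successful strategy, mirroring how a CLI
// might pick which kubeconfig to generate.
func firstUsable(strategies []StrategyStatus) *StrategyStatus {
	for i := range strategies {
		if strategies[i].Status == "Success" {
			return &strategies[i]
		}
	}
	return nil
}

func main() {
	strategies := []StrategyStatus{
		{Type: "TokenCredentialRequestAPI", Status: "Disabled", Reason: "running on a cloud provider"},
		{Type: "ImpersonationProxy", Status: "Success",
			Frontend: &Frontend{ImpersonationProxy: &EndpointInfo{Server: "https://proxy.example.com"}}},
	}
	fmt.Println(firstUsable(strategies).Type) // ImpersonationProxy
}
```

In the alternative shape discussed next, each strategy instead gets its own distinct status struct under a two-field object.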
G: I mean, isn't the client just going to... so I'm thinking about a client of this authentication thing, and isn't the client just going to want to know: where do I make my HTTP connection, and maybe what do I use for a proxy? What do I use for a CA bundle? What do I use for an API path? Something like that.
C: So when you run pinniped get kubeconfig, you would say which authenticator you want to use. It would go read that authenticator status and say: oh, if I want to use that authenticator, I can tell that it's running in impersonation proxy mode, so the kind of kubeconfig I need to target it is one where, for example, the server and the CA in the kubeconfig point at the proxy, and then the command that's embedded inside there is the version of the command that wraps up the token for the proxy.
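A sketch of the kind of kubeconfig that might come out of that flow (the server, CA, and exec arguments are placeholders; the exact pinniped CLI arguments here are an assumption, not the real generated output):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: pinniped
  cluster:
    # points at the impersonation proxy, not the real API server
    server: https://impersonation-proxy.example.com
    certificate-authority-data: "<base64 proxy CA>"
contexts:
- name: pinniped
  context:
    cluster: pinniped
    user: pinniped
current-context: pinniped
users:
- name: pinniped
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: pinniped
      args: [login, oidc]   # wraps up the token for the proxy
```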
F: One very small advantage of an array is: if the user installs Pinniped and they forcibly say, please do not run the impersonation proxy, if we support that option, then the server could just omit the entry from the array. It just wouldn't be there at all.
C
That's
true,
I
would
almost
think
we
would
still
have
it
there
and
have
it
say,
disabled
by
config,
or
something
like
that.
So
it's
obvious
one
other
thought
I
had
is
that
with
the
with
this
shape
of
api.
This
is
this
is
a
really
trivial
thing,
but
in
this
shape
of
api
we
could
pull
out
the
status
fields
of
each
of
the
strategies
and
actually
pull
them
up
into
the
the
column
output
of
get
the
the
coupe
ctl
output.
C
So
if
I
do
cube
ctl
get
john
authenticators,
I
would
see
here's
an
authenticator
it's
running
in
impersonation
proxy
mode.
That
I
think,
is
harder
to
just
describe
in
open
api.
C: I don't know. Also, something else to note about the API here is that it doesn't prevent a particular authenticator from having both strategies working, which I think is something we want in case we add a new strategy in the future: we want to keep one of these old strategies around for existing clients, but have something newer and better. That should work, okay, I think.
C: Maybe a future strategy that did some transport-level credential binding, meaning like a client certificate: that wouldn't work with this strategy, but it would work on the impersonation proxy. And so we could actually say, hey, this future client cert authenticator is publishing a different set of strategies than the token authenticator on the same cluster. So anyway, I don't want to drag on too long here. Do folks feel like we can move forward with this?
F: With manual enable/disable, and maybe a manual auto: "give me a load balancer" or "never give me a load balancer" would become the two options.
C: I think, though (I think Mo dropped off the call), the one problem that Mo mentioned with that is: on a cluster with…
C
Both
of
the
the
current
strategies
working,
we
actually
don't
really
want
to
use
the
impersonation
proxy
unless
we
have
to
like.
So
if
the
token
credential
request
api
works,
it'd
be
better
to
not
even
run
the
impersonation
proxy
at
all
and
that's
awkward
because
of
how
kind
of
eventually
consistent
all
of
our
logic
for
creating
this
is.
C
Okay,
I
think
maybe.
C: That did change my mind; I think I do like this API structure better now that we've talked about it, so I'll probably rewrite this. Okay, did Mo leave a comment about where he went?
C: On the API work, the work so far: I think there are sort of two pieces of work so far. One is just the runtime behavior, the runtime implementation of running the proxy, which is actually independent of all of this; this is all just how we tell how to run the proxy, or when to run the proxy, or whether to run the proxy. In the case where we do run the proxy, we have that code now, essentially; maybe it's not completely done, but it's there.
C
Then.
The
other
work
item
is
the
detection
of
detecting
these
different
cloud
provider.
Environments,
which
I
think
is
I
don't
know
mark,
do
you
have
any
status
it
sounded
like
the
status
was
the
first
thing
we
tried
didn't
work
very
well.
C: Yeah, I think there's enough metadata in the cluster that I'm confident we can find a good heuristic. There's the field on the node object, which I think is the thing we started with and, I think, the problem you're mentioning. There's also the version info of Kubernetes: most of the providers ship a version of Kubernetes where the version string is specific and has the name of the provider in it.
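A small sketch of that version-string heuristic (the suffix formats shown are examples of real-world GitVersion strings such as "v1.19.6-gke.600", but detection should not assume those formats are stable):

```go
package main

import (
	"fmt"
	"strings"
)

// providerFromVersion guesses the hosted provider from the API server's
// reported GitVersion, since some providers embed their name in it.
// This is an illustrative heuristic, not Pinniped's actual implementation.
func providerFromVersion(gitVersion string) string {
	v := strings.ToLower(gitVersion)
	for _, p := range []string{"gke", "eks"} {
		if strings.Contains(v, "-"+p) {
			return p
		}
	}
	return "" // unknown provider; fall back to other heuristics
}

func main() {
	fmt.Println(providerFromVersion("v1.19.6-gke.600")) // gke
	fmt.Println(providerFromVersion("v1.20.0"))
}
```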
C: Another thing is looking at the node objects and finding the nodes that are labeled as control plane nodes; if you don't find nodes labeled as control plane nodes, then you know you're probably on a cluster that's not self-hosted. Anyway, yeah, maybe we do need to... So, looking at the issues here.
C
This
is
the
story,
that's
finished,
but
not
delivered
because
we
haven't
merged.
We
were
basically
waiting
for
o50
to
land.
The
this
story,
I
think,
goes
away.
C
Maybe-
or
maybe
we
add,
we
rewrite
it,
rename
it
to
just
add
one
field
to
our
config
map
to
do
the
like
force
enable
this
is
the
status
information,
so
it
needs
to
be
rewritten
to
say
that
you've
discovered
this
information
via
the
authenticator
status.
C
So
we
can
take
that
after
this
and
then
these
two
stories
actually
stay
exactly
the
same.
I
think
so
and
then
maybe
there's
another.
B
Yeah,
I
think,
that's
a
separate
issue.
Those
last
two
stories
are
there's
already
like
work
for
those
on
the
impersonation
proxy
branch.
I
anticipate
that
changing
with
the
implementation
of
the
new
authenticators
back,
but
there's
already
like
stuff
in
flight
there.
I
think
the
branch
is
like
kind
of.
B
The
impersonation
proxy
branch
at
the
moment
is
kind
of
an
amalgamation
of
a
few
different
stories
and
isn't
quite
like
easily
teased
out
into
that
minimum
impersonation
proxy.
You
know
this
story
that
story
just
because
we've
been
of
holding
stuff
back
until
go
5.0.
C
That
makes
sense,
okay,
so
the
last
agenda
item
nancy.
C: I can turn it back over to you if you want, or I'm happy to just finish this out. The last agenda item is about this story, or rather this PR, which we may or may not want to ship today in the release, and we may or may not want to merge. I've read this issue, and I feel like I didn't really understand it, and I don't really have an opinion.
C
I
know
that
mo
had
an
opinion,
so
maybe
we
just
need
to
table
this
and
and
talk
about
it
when
moe's
back.
I
don't
know.
C
After
this
call,
because
I
would
like
to
cut
the
release
like
in
an
hour-
yeah,
okay,
I
think
that's
fine
cool
window-
shout
outs
this
week.
I
guess,
but
we're
here
we're
waiting.
A
No
all
right!
Well
thanks
everyone
for
joining
the
community
meeting
if
you're
watching
this
from
home
after
it's
been
recorded.
Please
join
us
live
for
the
next
pinniped
community
meeting.
That's
happening
happens
every
first
and
third
thursday
of
the
month.
So
today
is
the
first
thursday
and
we
will
meet
again
on
the
third
thursday
of
february.
So
we
will
see
you
then,
hopefully
than
that.
Thank
you.