From YouTube: Pinniped Community Meeting - March 4, 2021
Description
Pinniped Community Meeting - March 4, 2021
The Pinniped Community Meeting meets every 1st and 3rd Thursday of the month at 9am PST / 12pm EST. We hope to see you at the next one!
This week's meeting included discussion topics around how caching should work, and updates on the project roadmap.
More details here: https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ
A
All right, hi, welcome to the first-Thursday Pinniped community meeting, if you're watching this from home. Welcome, if you're watching this as a recording; we hope that you are able to make it to the next meeting. We meet every first and third Thursday of the month. Just a reminder that this is being recorded, and to adhere to the code of conduct when engaging in these meetings.
A
If you could go into a little bit more detail about what you're working on, just to give some insight into how that relates to the Pinniped project as a whole. So, first up, Andrew: it sounds like you've been working with Mo on some Pinniped-related Kubernetes 1.21 stuff. What does that mean?
B
Yep. So Pinniped, as many people know, uses the client-go credential feature from core Kubernetes to deliver credentials to client-go processes like kubectl, and since we rely heavily on that upstream feature, I think it's valuable for us to invest time in it to make it better. I guess one of the headlines of that feature right now is that it's beta, and we would like to get it to GA.
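For context, that client-go feature works by having kubectl run an external program (a credential plugin) that prints an ExecCredential object to stdout. A minimal sketch in Go, using the beta client.authentication.k8s.io/v1beta1 API mentioned above; the token value is a placeholder, not a real Pinniped flow:

```go
// Minimal sketch of a client-go exec credential plugin: kubectl runs
// this binary and reads an ExecCredential object from its stdout.
package main

import (
	"encoding/json"
	"os"
	"time"
)

type execCredential struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Status     struct {
		Token               string    `json:"token,omitempty"`
		ExpirationTimestamp time.Time `json:"expirationTimestamp,omitempty"`
	} `json:"status"`
}

func main() {
	cred := execCredential{
		APIVersion: "client.authentication.k8s.io/v1beta1",
		Kind:       "ExecCredential",
	}
	cred.Status.Token = "placeholder-token-from-your-identity-provider"
	cred.Status.ExpirationTimestamp = time.Now().Add(5 * time.Minute)
	_ = json.NewEncoder(os.Stdout).Encode(cred)
}
```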
B
I have done, like, one percent of this work, and Mo and others have done the other 99, so I don't want to take credit for more than I've done, but I have been working with folks to get that feature set to GA, so that Pinniped can sleep well at night knowing that we rely on a fully fleshed-out feature set.
A
Cool. Does anyone want to add to that? Have there been any issues with working on this?
C
I was going to add that the thing Andrew worked on primarily recently is: there was some functionality that was kind of broken in relation to interactive flows. So, like, prompting the user for information on the CLI didn't work correctly in certain scenarios, and I think Andrew and I spent an exorbitant amount of time looking at pipes and other mechanisms, like different streaming capabilities and files, on different OSes.
C
I guess I did learn a lot from that, in the sense that there's very little you can do when you only have three streams to pass all your data through, and they're being used for other purposes, and you're just trying to coerce something to do other things. But yeah, I think we landed on a small but good fix for 1.21, with a road plan.
A
Great. Anything else to add from anyone on that?
A
Okay, moving along: Ryan, implementing the impersonation proxy feature with Marco.
E
I think we've gone over, like, the why of this in previous meetings, and these past couple weeks have been kind of just trying to make it as robust as we can: getting TLS, getting a load balancer up if you need it, that kind of thing. Hopefully we're nearing the finish line.
A
That's good. Any serious issues or blockers that have popped up?
C
Okay, I want you to share a little bit about all the fun that you've had trying to make it work as well as you would think it would work. The actual stuff, like watching things, I know has been difficult.
F
Yeah, I was going to talk about that a little bit. It turns out the delta between the proof-of-concept spike that we did in, like, a day and the real thing is larger than we expected, for a bunch of reasons. A lot of it, and Marco and Ryan can talk about this, is the TLS provisioning and load-balancer provisioning and all the asynchronous controllers that coordinate to do that; and then also the actual proxy logic itself didn't work as well as we thought it worked.
A
Okay, great. Sounds like things are chugging along nicely, though. Now, next up we have Mo.
C
Yeah, so let's see. One of the things I did earlier on, since the last update, is I wrote the WhoAmI request API, which is a really, really small API that just lets you ask the API server who it thinks you are, what groups you're in, and that type of stuff.

For an authentication project, it's really nice to be able to just ask that question, because, one, it makes testing a lot of things simpler, but it's also just nice to know what am I actually logged in as, as just a normal end user. And it's fully generic, so it doesn't actually matter if you authenticated through Pinniped; you could just have a Kubernetes service account and it works just fine. It'll tell you what service account you are and so forth. So it's just a little small improvement there.
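As a rough sketch of what asking that question looks like: you create an essentially empty WhoAmIRequest, and the API server fills in its status with whoever it authenticated you as. The group, version, and field names below are assumptions based on Pinniped's concierge identity API and may not match your install exactly:

```go
// Hedged sketch of the WhoAmI request/response shape described above.
// The group/version and field names are assumptions; check your
// cluster's API discovery docs for the exact values.
package main

import (
	"encoding/json"
	"fmt"
)

type whoAmIRequest struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Status     struct {
		KubernetesUserInfo struct {
			User struct {
				Username string   `json:"username"`
				Groups   []string `json:"groups"`
			} `json:"user"`
		} `json:"kubernetesUserInfo"`
	} `json:"status"` // populated by the API server in the response
}

func main() {
	// The request body carries no inputs; the server answers based on
	// your credentials (Pinniped user, service account, etc.).
	req := whoAmIRequest{
		APIVersion: "identity.concierge.pinniped.dev/v1alpha1",
		Kind:       "WhoAmIRequest",
	}
	body, _ := json.Marshal(req)
	fmt.Println(string(body)) // POST this to the aggregated API
}
```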
C
Otherwise, there's just a few more days before the official code freeze for 1.21, so I've been doing upstream code reviews and various little bits of triage here and there, with regard to things that are a little bit more specific to Pinniped. In 1.21 I worked on some improvements to the CSR API; one thing that's already merged is that the upstream CSR signer is now not single-threaded.
C
So if you ask for a bunch of certs, it will hand you back a bunch of certs very quickly instead of very slowly, so that will hopefully matter for us in the future.
C
I have a PR open for short-lived certs, but I'm still trying, hopefully within either the late 1.21 time frame or early 1.22, to drive the actual behavior of the API changes to consensus. I've seen good arguments on both sides, and I'm almost tempted to propose maybe we just do both, so that way people can pick whatever they want. That might be the worst of all worlds instead of the best of any particular world. Yeah, that's it for me.
F
I think we're kind of expecting to finish up all of this work in the next, I'll say, few days, depending. There's a couple of bugs that we're shaking out and a couple of things we haven't quite started that we think are small, but it's actually coming along better than it kind of appears in GitLab and in GitHub. So if you look at the 0.7.0 milestone, it thinks that we're just a third of the way done; I feel confident saying we're way more than a third of the way done.
F
I also wanted to just mention that I've had a bunch of miscellaneous tasks this week, but one thing that came up was getting better caching in our CLI, for performance reasons. You don't want your kubectl commands to be slow because you're using Pinniped; you want them to be fast.
F
We know that there's a bunch of opportunities for caching. I took a shot at one of them, and I'm not sure that it's actually the right approach, so I added a discussion topic for later about that. Pablo, is Pablo here? Pablo is here, yeah.
G
Hey y'all. So I see the question of how's that roadmap coming along. It's coming.
G
The last two weeks have been a little bit of a holding pattern for me, because I've been a little bit knee-deep in, unfortunately, more VMware stuff that is program related. I do have a draft that looks a lot like the Harbor draft in terms of the layout, which I think I will look over with Matt during our pre-IPM today just to get a gut check, and which can probably get pushed to our Git by early next week, just so it provides a little bit more visibility.
G
Other than that, I don't have very much to report, aside from the fact that me and Jacob Nosman, who's another PM at VMware in the same security, identity, and auth space, have started conducting these interviews with the field and with customers around their auth experiences, which will hopefully also inform our roadmap, especially around user experience.
G
So far we've conducted two; we have, I think, like 10 more scheduled over the next few weeks, and we'll be synthesizing those. So hopefully I'll be able to provide some insights here, which I think will also be interesting to the community, by way of what actual users are encountering when they're either using this or trying to get started with the auth experience in Kubernetes. So that should be pretty cool.
A
Yeah, something I was thinking about earlier this week, when we were talking about CFP submissions and whatnot for KubeCon: just some things to think about when you're having these discussions. If there are really any interesting use cases from these customers, we could potentially combine that into a talk, where the first half might be discussing Pinniped, and then inviting someone like a user of Pinniped to discuss their particular use case with it. So just something to keep in mind when you're having those discussions, if that interests you.
G
Yeah, for sure. Most of the interviews we're conducting are going to be with the field, and to my knowledge there are a few people in the field already that have been using Pinniped, whether they know it or not. So part of what I'm realizing is that there's a component there that will be interesting. As I learn more about what those needs are, I'll definitely share them. I will also prioritize finding one or more people that have experience using this with customers, whether those customers know they're using Pinniped or are using some other thing that's integrated with Pinniped. So yeah, I can do that; I'll take that as an action. Cool.
F
Yeah, that sounds good. I just took some notes as we went here.
F
I think there are two caching layers that should exist in the CLI. Right now we have, like, one cache file; it's called the session cache, and it contains a list of entries. Each entry is identified by kind of which OIDC issuer URL there is, which client ID you're using, what scopes you asked for, and the redirect URI. So it's sort of all the information you would have used at the beginning of logging in, to start that login flow. And then, when you get logged in at the end, we cache it kind of under that key, and that cache then contains your refresh token if you have one, your ID token if you have one, and your access token, all in that same place. And then there's some relatively complex logic to then say, when you log in a second time or when you run kubectl:
F
Sometimes the ID token is there and it's already ready; you just return it. Sometimes you don't have an ID token but you still have an access token, so you can go ask for one, all right. Sometimes you have a refresh token, so you can do an offline refresh operation, and sometimes that fails and you have to just log in. It's all, like, pretty complex.
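To make that cache shape concrete, here is an illustrative sketch; the field and type names are invented for illustration, and Pinniped's real session cache code may differ:

```go
// Illustrative sketch of the session cache described above.
package cache

import "time"

// SessionCacheKey is all the information you would have used at the
// beginning of logging in; it identifies one login flow.
type SessionCacheKey struct {
	IssuerURL   string   // which OIDC issuer URL
	ClientID    string   // which client ID you're using
	Scopes      []string // what scopes you asked for
	RedirectURI string   // the redirect URI
}

// SessionCacheEntry is what gets stored under that key once the login
// completes: whichever tokens came back, all in the same place.
type SessionCacheEntry struct {
	RefreshToken string    // empty if the provider did not return one
	IDToken      string    // empty if not returned
	AccessToken  string    // empty if not returned
	Expiry       time.Time // when the short-lived tokens expire
}
```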
F
And then there are some other parts of the system that aren't cached right now. The tokens that you get from the supervisor's STS endpoint are not cached; we have an open issue about that. And then also, when you have whatever token you got from the supervisor or from your OIDC provider, you trade that into the concierge and get back a short-lived certificate, and we don't cache that either.
F
So I went to sort of dive in and add caching in all these other places, and it was just going to make this caching system even more, like, baroque, basically. I'm sure we could make that work, and I'm sure we could make it work correctly and safely, but it's just a lot of code and a lot of windy paths through the code to do all the different things.
F
So the simplification that I think we can make (I think this was Mo's idea) is to split the cache into two layers with different purposes, so that we have one cache that is just refresh tokens. The only purpose of storing the refresh token is to avoid having to open your browser to log in again to an OIDC provider, so that cache would be like our current cache, except we delete a bunch of the other code.
F
It would only have refresh tokens in it now, and so our OIDC client code that initiates a login would use that cache to try to save you from having to open your browser: if you have a refresh token, it would try to use it. And then there would be a second cache file that exists purely for performance reasons, and that cache would basically be a wrapper around our CLI.
F
You did something, you got back an exec credential; if you run kubectl again with the same parameters and your exec credential from the first time is still valid, it should be reused. This works because the exec credential has an expiration baked into it, and it basically means that we get all the performance benefits we want.
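The core decision in that second layer might look something like the following sketch; all names here are hypothetical, not Pinniped's actual code:

```go
// Hedged sketch of the performance cache: reuse a cached exec
// credential while its baked-in expiration is still comfortably in the
// future, otherwise fall back to the full login flow.
package main

import (
	"fmt"
	"time"
)

type execCredential struct {
	Token  string
	Expiry time.Time // the expiration baked into the credential
}

type credentialCache map[string]execCredential // keyed by the CLI parameters

func (c credentialCache) get(key string, login func() execCredential) execCredential {
	if cred, ok := c[key]; ok && time.Until(cred.Expiry) > 30*time.Second {
		return cred // fast path: kubectl stays almost instantaneous
	}
	cred := login() // full flow; may consult the refresh-token layer
	c[key] = cred
	return cred
}

func main() {
	cache := credentialCache{}
	cred := cache.get("example-key", func() execCredential {
		return execCredential{Token: "fresh", Expiry: time.Now().Add(5 * time.Minute)}
	})
	fmt.Println(cred.Token) // cached on subsequent calls with the same key
}
```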
F
If you're sitting there running kubectl against the same cluster over and over again, everything would be almost instantaneous, except, like, every five minutes where it would have to do a refresh. And then, when you hit that sort of five-minute timeout, when your concierge credential expires, it would trigger the CLI to go do something in the background that would attempt to use the refresh-token caching layer. Anyway, what I like about this is that I think it would simplify the OIDC client code a lot if it was only worried about caching refresh tokens, and I think that wrapper layer, the CLI layer of caching, would actually be really simple to build too, because it doesn't care about any of the semantics of the concierge or the OIDC process or refresh tokens or anything like that.
F
It's focused on one specific thing, which is: if you ask for a credential with some parameters, and then you ask for that same credential again, we can return it from the cache. Anyway, I wanted to run that idea past folks. I know I don't have, like, a design doc for this yet; I'm not sure if it needs one.
F
And so this was basically because, if you ask the concierge for a temporary credential and you get one back, and then you go to your IDP and you get a new token, you don't want to reuse the concierge credential again; the concierge credential might be a totally different identity than the one you're trying to use.
F
So, actually, I think this logic gets simpler if we say that this is not just caching the concierge exchange, it's caching the entire login flow, because then it means we can... let's see, we can cache... I'm just going to pull up the new documentation page we have about the command-line options.
F
We could have a cache key that's basically almost all of these parameters: so, like, all of the concierge API flags and all the OIDC flags. Basically, pull all these things together, and then probably do like I did in that PR and just hash them, to get a kind of opaque string that will change any time one of them changes, and use that as the cache key. So if any of those parameters changes, it will not reuse that cache entry again.
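A small sketch of that hashing idea; the flag values shown are invented for illustration:

```go
// Gather the CLI parameters that define a login and hash them into one
// opaque cache key, so a change to any parameter yields a new key.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

func cacheKey(params ...string) string {
	// Real code would want an unambiguous encoding of the values;
	// this just shows the shape of the idea.
	sum := sha256.Sum256([]byte(strings.Join(params, "\x00")))
	return hex.EncodeToString(sum[:])
}

func main() {
	key := cacheKey(
		"https://issuer.example.com",    // OIDC issuer (hypothetical value)
		"example-client-id",             // OIDC client ID
		"openid,offline_access",         // requested scopes
		"https://concierge.example.com", // concierge API endpoint
	)
	fmt.Println(key) // changes whenever any parameter changes
}
```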
F
That's an interesting idea, so I think we could express this... So one way we could build this, and I don't think this is probably the right way to do it for Pinniped, is we could actually build this as an entirely separate CLI binary, for whenever you have a kubeconfig that has an exec credential.
F
Then you could also have an async refresh kind of command that went through that cache file and tried to refresh every entry in it, because it would know the full command to run. I don't know; I could imagine building that, but I don't know if it's worthwhile, because the refresh actually shouldn't be onerous as long as it only happens rarely. But the other property that would be nice is: if your refresh token expires, then you're really kind of hosed, because you actually need to do an interactive user login to get a fresh token. It would be nice to have something that would catch that and let you recover from it without a kubectl command failing.
B
I know another thing we've talked about is, instead of storing these credentials on disk, storing them in some OS-provided trust store. Have y'all considered: if we were to build this caching system as you've come up with it in your heads, would it be pretty much the same difficulty to move to those OS-provided trust stores as if we tried to do that today?
F
Yeah, I don't think the actual file formats or anything would be that different, and I do think that's still work that we want to do, though I don't think it's work that we have filed. So maybe that's an action item.
C
Does it cause the entire file to disappear completely, so it's only in the key store, or are we storing, like, some key in the key store, and then what exactly are we encrypting? I just remember from some previous work, where I had done that, that folks liked having the files on disk for various workflows, and I do think there's probably some value in at least letting folks understand the meaning of the files on disk, even if you can't get any secrets out of them.
F
That makes sense: having something like that doesn't mean it has to be impossible to copy it off your machine. So we could also have, like, an export-session command or something like that, that takes some kind of opaque encrypted session file and lets you export a single entry of it in a more clear-text format that you can then import on another machine.
F
I think this is kind of an edge case; I think it actually exists because people want to have, like, service-account kinds of logins from things like CI machines.
F
I do think it's nice to have something that's maybe somewhat more debuggable than just a single big opaque blob, so hopefully we can do something there. I also think just keeping all the data in the keychain directly is a little bit hard to work with, because the keychain APIs on different platforms are a little bit different, so actually writing the caching code to store the data directly there might be pretty hard. And then also the UI for how you go look and see what's in the keychain is wildly different depending on what platform you're on; the macOS one is probably the nicest UI, and it's still really, really difficult to find the right thing and understand what's in there. So I think it would be nice to minimize the amount of integration we have with the system keychain, still get all the security properties we want, and keep some data in YAML.
D
Coming back to the caching question, it's probably worth spending more time thinking about. Like, one thing that comes to mind is that there are, what, five credentials involved that the CLI could choose to cache: there's the access token, the ID token, the refresh token, the STS token, and the actual credential that we issued for the workload cluster.
D
All five of those could have different expiration dates, and so if we oversimplify and we say, oh, the only expiration date that matters is the last one, and whenever that expires we're going to go all the way back to the beginning and try to use the refresh token, we've... actually, maybe we're causing ourselves to do too much work. Like, maybe we could have just used the access token and gotten a new STS token, or maybe the STS token is not expired yet and we could just use that immediately.
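That trade-off can be sketched as a chain of credentials with independent expirations, where a re-login could resume from the furthest stage that is still valid instead of always going back to the refresh token; the names and lifetimes below are illustrative only:

```go
package main

import (
	"fmt"
	"time"
)

type credential struct {
	name   string
	expiry time.Time
}

// resumePoint returns the index of the latest-stage credential that is
// still valid, i.e. where a re-login could pick up the flow.
func resumePoint(chain []credential, now time.Time) int {
	for i := len(chain) - 1; i >= 0; i-- {
		if chain[i].expiry.After(now) {
			return i
		}
	}
	return -1 // nothing valid: an interactive login is required
}

func main() {
	now := time.Now()
	chain := []credential{
		{"refresh token", now.Add(8 * time.Hour)},
		{"access token", now.Add(5 * time.Minute)},
		{"ID token", now.Add(5 * time.Minute)},
		{"STS token", now.Add(-1 * time.Minute)},           // expired
		{"cluster certificate", now.Add(-2 * time.Minute)}, // expired
	}
	fmt.Println(chain[resumePoint(chain, now)].name) // prints "ID token"
}
```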
F
There's an assumption I was making, which is that the lifetime of the last credential you get, like the concierge credential, is similar to all of the intermediate credentials, up until you get to the refresh token. Kind of like: your concierge certificate might be about five minutes (that's our current value), and your access token and ID tokens might also be about five minutes. I'd want to see if there's a good, compelling reason why the lifetime of those ID tokens should be longer.
F
I can see there's, like, a small performance benefit of doing fewer requests in those cases, but the simplicity of only caching the last one is very attractive to me.
C
I was going to mention: if we step back, in a different reality where my dynamic upstream KEP has landed, those two exchanges at the end would be one exchange, right? They only exist to coerce a supervisor credential into a Kubernetes credential; in my head they're actually the same credential, just in different forms, so they should have the exact same lifetime, because they're supposed to. Yeah.
F
In fact, I remember when we picked that five-minute value, I think we had a discussion that maybe the end date on that certificate should actually match the expiration of the JWT, if you're passing in a JWT. Yeah, I think we didn't do that, because it's hard to apply the same semantics to an opaque webhook token, where we don't always know when it expires.
C
And we have to be careful with the certs too, because we can't revoke them in any way. You can at least theoretically revoke ID tokens by just stopping trusting the signer that they were signed by, but you can't do that with the certs, right? So basically you have to pick a length that you just don't care about, and that's five minutes, basically.
F
Yeah, it was much worse than anything we've seen in testing, and I think there's probably something wrong with that environment that's causing it to be that slow. But real clusters sometimes are that slow, and so I think we want to account for the case where these Pinniped components are behaving really poorly on the server side because of something that's out of our control, like the node is oversubscribed, or the network is really bad, or something like that.
F
Anyway, I think I'm satisfied with the discussion on that. This is good. Yeah, I don't have any other discussion topics for today, unless anybody has anything on their mind.
A
All right, so thanks everyone for attending the community meeting today. This will be up on our YouTube playlist, which is listed in the agenda doc, and I'll share it on our Twitter and our Slack channel. If you're watching this recording, just a reminder: we hope that you join us at the next community meeting, which is going to be on March 18th.
A
We have these every first and third Thursday of the month. So, with that, have a good day, and we hope to see you next time.