From YouTube: Pinniped Community Meeting - November 4, 2021
Description
Pinniped Community Meeting - November 4, 2021
We meet every 1st and 3rd Thursday of the month at 9am PT. We'd love for you to join us live!
This week the team discussed project roadmap items for November and a Slack discussion on removing the Supervisor's HTTP port 8080. See full notes here: https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?both#November-4-2021-Agenda
A: If you have any questions about Pinniped, want to provide feedback to the maintainers on anything you're interested in regarding the tool, or need help with anything, you're welcome here. Or you can just listen in and lurk; you don't have to actively engage. We welcome all types of participants, so come and join us. We meet every first and third Thursday of the month at 9am Pacific time.
A: If you do have something to discuss with the team, you can put that down in the discussion topics section and we will get to it towards the end of the meeting. If you're unable to join and still want to reach out to us, you can find us in the pinniped channel of the Kubernetes Slack workspace, and if you aren't a member of that Kubernetes Slack workspace, you can request an invitation to gain access. It's normally pretty quick; you shouldn't have a problem.
A: You can also find us on Twitter at project pinniped. At any point when you're engaging with us or any members of the community, we ask that you please read and abide by our code of conduct, and that includes interacting with us on Slack, on Twitter, on GitHub, as well as within this community meeting.
A: If you are using the tool, we would love to know more about your use case and the organization you may represent; or if you're using Pinniped independently, we're really curious as to what you're using it for and want to know more about that. You can go to this GitHub discussion and put the details in a comment, as well as adding your logo if it's an organization, and we could do some cross-promotion with it and add it to our adopters page.
A: Also, when you're attending these meetings, we ask that you put down your name and the organization you're representing; that just helps us keep track of who's attending from outside of the maintainers.
C: So the first one, "supervisor token refresh fails when the upstream refresh no longer works": for OIDC it's pretty much done, or at least the basic work of just checking that the refresh token still works is done, so that'll be in whenever we cut a release that includes it. I'm also done with the basic check that an LDAP user still exists and, you know, still has the same username and UID.
C: And I'm still working on getting Active Directory upstream refresh working. That would be, you know, checking specific Active Directory attributes. Like, is the password-last-set date and time after the time when you logged in this morning? Or, you know, not necessarily in the morning, but that's the example: if it's after you logged in, then your password's changed and we should log you out so you can re-authenticate with your new password. Or is it a deactivated user?
C: Because there's a specific attribute that we can check to figure that out.

C: I figured out the integration test for password-last-set, and that was kind of tricky, because previously our Active Directory test environment was read-only. So, in order to test it, I had to, you know, give our test bind user permission to create users, and then use that to create a user, change the password, and then see that logging in as the user works before you change the password, and that refreshing after you change the password doesn't work.

C: And I think that works, although, I mean, I haven't finished implementing the code that the test is testing, so we'll see if there was some weird bug in it.
B: Margaret, remind me, what did we say we're gonna do about password expired, and disabled or locked accounts? Not disabled, but locked. Did you decide on those?
C: Still useful, but not quite as high value. At least the way it seems to me, it's very high value to know that as soon as you deactivate a user account in Active Directory, the next time the user refreshes, within five minutes, they will be logged out; whereas password expiry only happens every...
B: Yeah, I could buy that. We should probably write that down and make sure Anjali agrees when she's back. I mean, at some level we have to document in our AD API what checks we do and what checks we don't do, right? That way it's clear to an administrator, when they're configuring this stuff, what they can rely on; and if for some reason their way of disabling users, or however they revoke access for users, is not covered, then they're aware.
B: Scott, since you have experience with AD, are there any other obvious patterns of revoking access in AD that you're aware of, that we should try to cover?
D: The only other thing that I can think of is, you know, moving the user object; but because of the way that Pinniped works, where it's getting the group membership again on, you know, every refresh, if I remember correctly, that shouldn't be as much of an issue.
C: The check that the user still exists is by the DN, so if the DN changed, then you would be logged out, because we wouldn't be able to find you. But we're still trying to figure out the most reasonable way to do upstream group refresh, because it can be a pretty slow operation, so it might not make sense to do it inline. Like, if every time you refresh you have to re-fetch nested groups, that might, in some large directory instances, take 45 seconds.
D: So it would just create a very weird user experience, where the user won't necessarily understand why he's logged out. If he's disabled, he should know; if his password changed, he knows that he changed his password.
B: Right, yeah, that all makes good sense. Does the same reasoning apply to locking yourself out, maybe?
D: That may actually be a legitimate case to lock them out, because that means that the person is making a mistake with their password, or there may be some other entity trying to log in with that password that is locking them out. Usually, in that case — from my experience — a user gets locked out from logging into their Windows machine too many times, where they mess up the password.
D: At which point you want to lock out any permissions that they have anyway as well, because there's some security breach or whatever going on there. So I think that locked does make sense. I'd say for locked, yes, they should be logged out, but not for, you know, the other case.
B: Okay, yeah. So, if I could distill what you said: if the reason for the account not working is some active reason — in the sense that it was locked due to either bogus or incorrect attempts from the user or someone trying to break into the account, or the account was explicitly disabled, or the user object was physically moved in AD as a way of stripping permissions — those are all active things that happen, caused either by the user, a malicious actor, or the IT admin, and we use those as signals.
B: But in the case of password expiration, that just happens sort of implicitly in the background, and the user will catch up to that probably within a couple hours or the next day, and that's probably fine; it's not a big deal. And certainly, I suspect, if they tried to make a new session with Pinniped, AD would refuse to bind because their password has expired, and they'd have to go investigate why AD was refusing to let them log in.
D: Exactly. And after they log back in with Windows, within five minutes their Pinniped session is now disabled again, because they've changed their password when they logged back into Windows; so now they just log back in with the new password that they just set. So I think: active changes, yes; what happens in the background, no. Basically, that makes perfect sense.
B: Makes sense. I think, on the groups stuff, the high-level thought we have right now is: we would do groups refresh for OIDC every time, because doing the refresh and then doing a userinfo lookup is supposed to be quick.
B: So I think we could probably do that every time. For LDAP and AD, I think probably the simplest thing to implement — that also doesn't trash your LDAP or AD server for bad reasons — would be: when you refresh, we would lazily, in the background, initialize a group refresh, and that would update your groups slowly, probably at some cadence in the background; and then, as you do refreshes, we would update your groups to what we thought they were by now. But it would probably come in lazily, and it would not be inline.
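A minimal sketch of that lazy, out-of-band shape (a hypothetical illustration, not Pinniped's implementation): token refreshes read the cached groups, while a background loop re-fetches them at some cadence.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// groupCache serves cached group memberships on the refresh hot path and
// updates them out of band, so a slow nested-group query (potentially tens
// of seconds against a large directory) never blocks a token refresh.
type groupCache struct {
	mu     sync.RWMutex
	groups map[string][]string
	fetch  func(user string) []string // the slow directory query
}

func newGroupCache(fetch func(string) []string) *groupCache {
	return &groupCache{groups: map[string][]string{}, fetch: fetch}
}

// Groups returns whatever we last believed the user's groups were; it never
// talks to the directory inline.
func (c *groupCache) Groups(user string) []string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.groups[user]
}

// refreshOnce performs one out-of-band group fetch and stores the result.
func (c *groupCache) refreshOnce(user string) {
	fresh := c.fetch(user)
	c.mu.Lock()
	c.groups[user] = fresh
	c.mu.Unlock()
}

// startRefreshLoop re-fetches the user's groups at a fixed cadence in a
// background goroutine until stop is closed.
func (c *groupCache) startRefreshLoop(user string, cadence time.Duration, stop <-chan struct{}) {
	go func() {
		c.refreshOnce(user) // prime the cache immediately
		ticker := time.NewTicker(cadence)
		defer ticker.Stop()
		for {
			select {
			case <-stop:
				return
			case <-ticker.C:
				c.refreshOnce(user)
			}
		}
	}()
}

func main() {
	cache := newGroupCache(func(user string) []string {
		return []string{"developers", "admins"} // stand-in for a slow LDAP search
	})
	cache.refreshOnce("mo")
	fmt.Println(cache.Groups("mo")) // later refreshes are served from the cache
}
```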
D: Right, which I think makes sense. I think the other option — if it was difficult to do with the caching and not inline — is to just have a Pinniped command, even if it is a manual command, like "pinniped refresh ldap-groups" or whatever; you know, having a mechanism to do it.
D: Yeah, and I think it's definitely a last-resort type of thing — obviously, inline is a serious issue, and doing something async would definitely be preferable to that — but yeah. And I'm definitely plus-one on the login idea, because I personally log in with multiple users sometimes, and if I'm doing a demo or testing or things like that, being able to log in with multiple users, or log out from one user and log in with a different user, is beneficial.
B: Yeah, totally. You know, we have that story open for helping you get kubeconfigs from the supervisor more dynamically and stuff. I think, sort of in the same vein, a more obvious way for a user to control their sessions — like, maybe not even just having to log in and log out; maybe you're allowed to log into more than one user, and then you tell Pinniped which is the active user or whatever. Or maybe it's a per-terminal thing, where you can say, well, in this shell I want to be Mo, and in this other shell I'm Margo, or whatever. That way, things like demos, and just demonstrating the system working, are not semi-crippled by constantly being in just one state, right?
D: Yeah, currently, for demos that I do of this type of thing, I have two WSL machines opened up on my Windows machine, with the same Pinniped kubeconfig, but this one's logged in as user A and this one's logged in as user B, and it's all local on my machine.
B: Yeah, so what you could do — probably the easiest thing — is, if you just had two separate Linux users on the WSL VM, all you basically have to do is give Pinniped different home directories, which...
B: After speaking with rpm and others, we've decided that that restriction is probably too harsh. So, basically, what our plan is:
B: Certainly, you know, we encourage everyone to give us a refresh token if possible; but if you don't, we can fall back. Basically, if the IDP gives us an access token that is somewhat long-lived — so not, like, a five-minute or an hour token, but something valid for three hours or more — and the OIDC IDP has either a userinfo endpoint or a token introspection endpoint, that's enough for us to probe the IDP with just our access token.
B: So, like: hey, does this token still work? Meaning, is the user backed by this token still valid? So we'll try to do that graceful fallback. That way, you know, if for some reason refresh tokens are not possible — because they're optional in the OIDC spec, they're not a requirement, right — you still have a choice. Because right now it would basically make something like using SAML through Dex with Pinniped impossible, which was not our intent.
D: No, for sure. I just implemented Pinniped, unfortunately with Dex as a backend for Active Directory, because this customer has no LDAPS, and that's planned for another six months, and there is no ability to change their timeline of when they're going to implement LDAPS or StartTLS. So, since Dex supports plain port 389, that was done as a temporary measure, which, yeah.
B: And in that same train of thought, that's exactly the way I want people to think about Dex versus Pinniped: which is that, you know, Pinniped is a much newer project with a much stronger set of hardened security opinions.
B: Cool, I think I can talk about the next thing real fast. In the same line as hardening our upstream refresh, we've been working on hardening our TLS configuration. So today, if you do, like, a TLS scan of Pinniped's components, they'll respond with "I support triple DES as a cipher" — that's what they do — and then we fail a security audit pretty hard, immediately.
B: This is going relatively well. The Kubernetes client code that we use has put up a significant fight in not being willing to configure these knobs in any reasonable way, but I think I have coerced it to behave after, like, three weeks now. Some absurd amount of time has been spent on this.
B: I might be at commit 60-something right now; it's fine, I'll squash it, it'll just be one big commit with all the bad things ignored. Like, it's gone through so many revisions; there's so much commented-out code where I was like, well, that didn't work, I'll just comment it out and try a different thing now. But it's going well, and so basically the default security posture of Pinniped will be...
B: If you insist on downgrading — because Dex is, you know, a modern Go stack, so it can do all the ciphers that we want — then it'll turn around and basically state no opinion about TLS to the thing it's connecting to, which is gonna be your LDAP or your IDP or whatever. So you have an out. But again, our default posture will be significantly increased, and also for components like the Concierge that don't have a user-facing component...
B: They sit inside the cluster behind the Kubernetes API server, and thus the only clients that talk to them are our components and the Kubernetes API server, and that one will be much more strict, because there's no reason not to — you know the setup, right? It's very small, and also you should never be directly talking to the Concierge service.
B: There's literally no need to do that. So yeah, that's the general gist. And the reason I sort of took on this effort is because the next thing on the list is the FIPS work, to get us in a state where you could have a FIPS build. To do that in a way that would be sane and easy to reason about, I had to go through the code base anyway and collect all the places where, basically, we make connections, either incoming or outgoing. And so, yeah, this...
B: Yeah, and to be fair, all it's going to require for you to get the artifacts is: you'll run docker build on the FIPS Dockerfile.
B: Totally fair, I totally expect that to be the case, but yeah, we just want to make it easy, and we want to give people confidence that if they do that, they're likely to get an artifact that at runtime will do something sane. Sounds good — cool. I think that's about as far as we've gotten. Yeah, that gives us until the end of the year, and then we'll figure out some stuff.
A: Awesome, thanks everyone for all of those updates and discussions. So I guess now we're moving on to the discussion topics that were pulled from Slack. Mo, would it be helpful for you to take over the screen?
B: I don't need to, because there wasn't really that much response on the Slack thread, but I can give some context, and maybe Margo can provide some history, because I'm unfamiliar with exactly why this port exists. I've heard Ryan say it's for health checking, just fine, I guess, but that doesn't explain the rest of the API surface being available on that port. Margaret, do you happen to remember why we have this port? And if you don't, that's totally fine; I just wasn't around at the time.
D: To the point above it, there's definitely an ease of use there, but it is a security posture issue, exposing HTTP along that entire leg, so it makes sense to get rid of it. I understand the logic behind it — a lot of projects have it, for things that terminate at the load balancer level. That doesn't mean it's secure or a good thing to do. A lot of people still use plain LDAP, and that's not good either.
B: Yeah, and I think you can maybe reason your way out of it for some simple web app or something; but if this is the app that is your trust anchor for your Kubernetes environment, that's probably not the threat model that you want to have.
B: Then you're literally passing passwords back and forth over that plain-text channel, which seems like a bad idea, and it's not that bad to configure re-encryption on the way in, right? Like, I've checked at least Contour, and I'm assuming the nginx stuff definitely supports this too. Yes, you have to configure more things: you have to tell it to terminate the TLS at the LB and then re-encrypt and send it back to the Pinniped endpoint, and make sure Pinniped itself has a proper serving cert.
B: ...as far as the LB is concerned, and all that jazz. So yeah, it's totally more work. What concerns me is that, you know, I went and looked at the various open-source folks that I'm aware of that use our stuff, and all of them...
B: You know, I'm just like: no. Like, I get it if you're just getting started — it kind of makes sense — but once it's time to, like, write a blog post or push it into your core cloud project, you should probably just provision some certs. Also, you know, things like cert-manager can automate the bulk of this work.
D: That actually makes it really not difficult. I've yet to see clusters without cert-manager these days; everyone seems to install it for something that requires it. So, with cert-manager there, I think if that was documented in the Pinniped docs — how to do that with, for example, ingress-nginx and Contour, the most common ones, or whatever it is — just how to generate that cert, I think that would be beneficial.
D: Possibly, you know — and then anyone can do it themselves however they want to — but suggesting at least possibly using cert-manager there makes it easy.
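As a sketch of what such a doc could show, a cert-manager Certificate that provisions a serving cert for the Supervisor might look something like this. The names, namespace, hostname, and issuer below are all placeholders, and the issuer is assumed to already exist:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pinniped-supervisor-tls
  namespace: pinniped-supervisor
spec:
  secretName: pinniped-supervisor-tls       # Secret to reference as the serving cert
  dnsNames:
    - pinniped-supervisor.example.com       # placeholder external hostname
  issuerRef:
    name: my-cluster-issuer                 # assumed pre-existing issuer
    kind: ClusterIssuer
```

The ingress (Contour, ingress-nginx, etc.) would then be configured to terminate TLS at the edge and re-encrypt to the Supervisor, which serves this cert.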
B: Yeah, that makes sense, yeah.
B: I think one of the things we had planned on doing, as we worked on removing this, is that we would write the release blog post for whatever release includes this removal and include in there, like: all right, you want to set up Pinniped? Here's a discrete example where we purposely go through the pain of doing this, either through cert-manager or maybe provisioning our own certs using, like, cfssl or whatever, just to show a concrete example. And, as you mentioned, Scott, internally you can use whatever CA you want.
B: Self-signed or not, totally fine; you don't have to go to some external PKI or anything like that. And also, I'll probably push on this a little bit more upstream in Kubernetes.
B: The default posture inside the cluster — you know, even without cert-manager — included a local service signer, and all you had to do was put an annotation on your Service, and that would cause a Secret to be created with a cert signed by a CA internal to the cluster; it gave you the secret, and you just mounted that in your pod, and you were done.
B: That was all you had to do, and basically, because of the ease of that, it was omnipresent; there was no reason not to re-encrypt your routes, ingress, whatever, at that point. And I do want to kind of push on this a little bit more upstream, because I think that should just be the default posture in Kubernetes — that you don't need to install anything. To me, it's the same thing as the new PSP stuff: you shouldn't need to install something to have that default level of security.
B: Yeah, it would totally be opt-in, in whatever way made sense. It just needs to be there: as things are made easily available, you know, people's manifests start to include the right settings sort of by default, but they have to be there, kind of omnipresent, for that to happen.
B: In terms of the actual mechanics of the removal, I'm unclear on exactly how we should go about it. There was hesitation from, I think, Anjali and others that if we just flat-out removed it, it would break people's stuff; and so maybe we would do a graceful removal, kind of Kubernetes-style, where there'd be a configuration option that starts off where leaving it unset means "yes, serve the unsecured port," and then later would mean...
B: ...if you didn't set it, it would mean "don't serve the insecure port," but you could still turn it on explicitly if you wanted to; and then later still, it would be gone and the port would be removed. So you would have some window to figure out what you wanted to do. I'm still debating in my mind if all that rollout is really needed, but maybe it is, you know. Certainly, every example I've seen externally seems to be that everyone wants to just use this insecure port.
D: And I think another option — and this just happened in one of the Carvel tools; I think it was kbld, where they changed the Go package from the old k14s to carvel — is that they released two releases, one right after the other: one without the change, because they had additional features coming in as well, and then one with it, since it's a serious breaking change for these open-source users.
D: I think having this in its own release, coming right after — even if it was planned to be with, you know, other features — having it in its own release immediately after makes sense, because it gives enough time until the next release for anyone to make the change; or they can stay on the release with the newly released Pinniped features that you don't want to take away from them, and just stay on that release for the meantime. Having two releases, one after the other, is an easy way of giving people features and, at the same time, not locking them out.
B: Yeah, that makes sense. So even if we did sort of just arbitrarily make the change at some point, we would make it so it only existed... like, say we started this work right now, and it was going to go into 0.13, the next release.
B: We would actually not merge this change with all the stuff we just talked about on the roadmap; we'd first release 0.13 and then immediately release 0.14, and say that 0.13 has all those cool features that everyone wants — all these hardenings with your upstream IDPs — and, by the way, 0.14 is what you should be on, and all it does is also remove this port. So: upgrade to 0.13, get everything working there, fix your TLS stuff too, and then upgrade to 0.14.
B: Oh, I have sort of one more thing. Something we recently saw is that Pinniped wasn't working correctly on a private GKE cluster because of some networking constraints, and so one thing I planned on doing was changing our default service ports — the ones that are sort of used internally — from, like, 8443 to, like, 10250. The reason I would pick that port number is because that's the port number that the kubelet listens on, and so it tends to always be allowed. So that would make it so that in those environments it would work by default.
B: The concern I have is, I think, that during a rollout of your deployment on such an update, there's probably at least some transient time where traffic doesn't route correctly. I mean, it might be on the order of a minute on, like, a fast cluster or whatever; it's probably not a big deal. What I'm trying to figure out is...
B: Certainly, you know, that would be included in any release notes whenever such a change happened, but I'm trying to think of how people view upgrading a component like this. Like, is that one-minute blip okay? Do we have to come up with more fancy approaches to try to make that not happen?
D: I think one minute is okay, as long as it's noted in the release notes — from my side, at least. And, I mean, it's usually less than a minute, changing a port in a fast cluster; in slow clusters it'll take some time, but in fast clusters the change will be almost instantaneous, depending on what CNI you have in terms of the routing — if it's Cilium and you're not using kube-proxy...
D: It's a matter of: you made the change and it just works, basically, even in large clusters. But I think that would be fine. The one thing that I wonder is if there are people creating a bunch of network policies, or, you know, whatever, that could get messed up by this port being changed.
B: Yeah, that's fair. I mean, I guess at a higher level, the question would be: is the arbitrary port that we picked part of our public API? Is it something you should depend on? I guess I don't know if you have a choice — I actually don't remember how network policies are configured. You can do them at the service level, right? You don't have to do it by pure ports.
D: It changed a lot recently, but it is ports, I believe, because they just added things like port ranges and all of that in there, instead of just single ports. I think it is by port only.
B: Well, I suppose if it's included in the release note — and you know what the old port is and what the new port is — you could update your network policy first to allow either one.
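That transition could look like the following sketch: a NetworkPolicy that allows both the old and the new port during the upgrade window. The namespace and label selector are placeholders, and the port numbers are simply the two mentioned above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pinniped-serving
  namespace: pinniped-concierge
spec:
  podSelector:
    matchLabels:
      app: pinniped-concierge       # placeholder label
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8443                # old default port
        - protocol: TCP
          port: 10250               # new default port, so either works during rollout
```

Once the upgrade is complete, the old port entry can be dropped.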
B: It's hardcoded today; there's no config for any of those ports, they're just static. We'd have to figure that out. So there is at least some desire to have those ports be configurable. I don't know if I want to put them in the ytt values, because the use case I've seen so far for making the ports configurable is:
B: "I am running Pinniped on a cluster with no CNI, so I need to set the pods to host networking." I was like, that sounds awful — why would you do that? But okay, yeah. So the stuff I want to see in ytt is the stuff I expect people to change, not the stuff that, like, someone wants to hack on, like "use your own..." — well...
B: Yeah, exactly, it's like an escape hatch where I'm like: if you want to do it, go do it; not my fault if you break it. To me, it's just a very low-level, advanced use case, and if you're in that networking environment, you know that you put yourself right there, and you already have to do overlays to change our manifests to set host networking to true, because they're not — why would they be, right? So, okay, I think we're about out of time. Thank you, Scott, for attending.
D: Exactly, it's messed up, because all of the Kubernetes invitations and all of the calendar invites still show an hour late in my calendar, and nothing syncs correctly in terms of invites, and it's terrible, but yeah.
A: Hey, well, thanks everyone for some really great discussion for this meeting. If you're watching from home and you have feedback that you wish to share on what was discussed today, please come find us on the Kubernetes Slack workspace in the pinniped channel; we would love to hear from you. And we also welcome you to come and join us for the next community meeting, which will be two weeks from now. We meet every first and third Thursday of the month at 9 a.m. Pacific time.