From YouTube: ClusterAPI Self Assessment Working Group 20220119
A: Okay, so first of all, thank you so much for adding the comments. I know Robert, Naadir, and I think Lubomir also added some comments, so thanks a lot for that.
A: I'll also explain a bit what the main things I added are before we go deeper. More or less: the overview and communication channels, with some minor updates, are pretty much the same; project overview and project non-goals, I made a couple of updates, but again more or less the same; the personas looked great — I think those were added by Ankita, if I'm not wrong, so that is pretty much unchanged; types of clusters, same thing; and then we get to data stores and data flows.
A: This, I think, Robert has taken ownership to implement, so we'll let him talk about it later. The main changes here are on the STRIDE threats: we went through all the categories, all the threats I could come up with — thank you all for the feedback there. And then, after that, security issue resolution is again something that still needs some content; we can discuss it more and I can write it up based on our summary. And that's about it — that's how the whole document looks.
A: Okay, cool, all right. So, Naadir, let's talk about this comment: we need to add a version of this that doesn't include use of Secrets Manager, as that's typical for all providers except AWS, and it will highlight additional threats.
A: Would it make sense to do it if we end up doing a second iteration, where Secrets Manager is not included and other providers are included?
A: Okay, sounds good. So that means — would we be adding like one or two threats, or ten more threats? It's one more threat, but it's pretty big. Okay, so that's good; I think it's not a huge change, and it should be easy to do. Would you need help making the updates to the diagram?
A: No? Okay — because we are still using the Excalidraw one, I was wondering if you need the raw copy to make the data flow changes. No? So...
B: Because we want to PR this into the cluster-api repo, I'm creating a new branch on my fork. And we've always been using PlantUML — we've used it forever in the cluster-api repo, right — and I can more or less get the component diagram in PlantUML to do what we want. So, okay, that's...
A: ...going to go away. Okay, that works. Yeah, as long as we can be consistent, and it is easier to version control, that's fine with me. Okay, so we'll let you take care of this, and then let's move on. One thing I wanted to confirm: were people — especially Cluster Lifecycle folks — able to go through the content in the data flows here and fact-check me? Because there is a good chance I might have messed something up or missed something.
B: Yeah, I didn't read it in detail, but I will go through it, and probably what I'll do is...
A: Yeah, I totally agree on markdown. My hope was we get the initial reviews in the Google Doc and then create a PR in SIG Security where the markdown can live, and then Cluster Lifecycle or the cluster-api project can link to it from there.
A: Okay, cool, all right. So that's definitely one of the items: at the end of the reviews, let's convert this to markdown and create a PR. I can take care of that, and then all the smaller updates we can maybe do there as well.
A: Okay, Robert, do you want to talk about this? It looks like Naadir has a good number of people who can help out, and also some talks that are relevant here.
D: I filled that in last year. Okay.
A: Okay, all right. So if you're good — unless there is an item where we need an edit here — I can mark this as done, or do you want me to keep it open for you to make updates? Keep it open; I wouldn't mind you seeing them.
D: So I can dig up the contacts and send them to you, but there should be resources available to do some of that.
A: Yeah, that would be helpful. If you can add the details here, Robert — I think, yes, you know, Fabrizio might.
A: Okay, sounds good, cool. So maybe before we move into the threats, because there are a lot of them: any high-level comments or thoughts on the list of threats, what we are going to do about them, etcetera?
A: Okay, I'm guessing that means I wrote it well. Okay, so my thinking here is mainly for us to figure out — okay...
A: I think there are potentially a couple of ways to deal with it. One is we could decide the severity of each threat and, based on that, decide when we want to implement the mitigation. The other would be just leaving it to the project, and you all decide based on your backlog and other priorities when you want this implemented. But at least having the issues will make sure that we don't lose track of it.
D: Just paralleling the CNCF review process: I think the output is to just identify the issues and make recommendations.
D: It's really then kicked back to the project to prioritize and find volunteers to fix things, produce whatever designs. So this is not a pass/fail, right? It's just: here's a set of things we found, and here's a set of recommendations. Yeah, exactly. And we can make some loose high/medium/low assignments of priority.
A: Yeah, exactly, that's what I was thinking too. I also know there is some backlog page on the website, or in the book, if I'm not wrong, for cluster-api — would that be a place we would need to update as well? I see a roadmap and backlog mentioned somewhere.
A: Oh, this doesn't work the way I thought it would — okay, anyway, I think Robert has got this. So the next one was spoofing of the infra admin, and there were some discussions on this. The assumption here is that the credential is stolen — I mentioned there are multiple ways it could be stolen, and it's not so much a theoretical possibility that is impractical, where we could say it can never be stolen.
A: So that's the main assumption behind this, and the idea is: because there are so many privileges attached to that infra admin credential, they can do pretty much whatever they want with the resources. So what can we do to mitigate it? These were some suggestions, and one thing I wanted Cluster Lifecycle to fact-check again: these mitigations make sense in my head, but there is a good chance they won't make sense when we actually implement them, or something else might make more sense.
B: I think they're fine — except the number four mitigation probably doesn't make sense. Basically, the issue, particularly outside of AWS — let's take vSphere as an example — is that you're going to have credentials to vCenter in your management cluster as a Secret.
B: So you don't need to be an infra admin to exfiltrate those credentials. You just need to have access to the namespace in which Cluster API vSphere is running, and if you have that, you can exfiltrate the Secret.
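The namespace lockdown being discussed could be sketched with plain RBAC; this is an illustration only (the namespace, Role, and ServiceAccount names here are hypothetical, not taken from the project): a namespaced Role grants Secret access only inside the provider's own namespace, so a user without a binding there cannot read the vCenter credential.

```yaml
# Illustrative sketch: confine Secret reads to the provider namespace.
# All names below are hypothetical examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capv-secret-reader
  namespace: capv-system        # namespace where the infrastructure provider runs
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capv-secret-reader-binding
  namespace: capv-system
subjects:
- kind: ServiceAccount
  name: capv-controller-manager # illustrative controller service account
  namespace: capv-system
roleRef:
  kind: Role                    # namespaced Role, not a ClusterRole
  name: capv-secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this is a Role rather than a ClusterRole, subjects bound in other namespaces get no access to the credential Secret at all.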
A: One thing I forgot to mention, related to the number four mitigation: there will potentially be some threats that can be mitigated just by the end user of cluster-api doing something that we recommend. Number four could be one of those, where we say: hey, if you're going to create a new cluster or add a new node, make sure the admin is not doing it by themselves. So, based on that, there is potential scope for documentation updates, again maybe in the book that we have for cluster-api.
B: Yeah, I think that's why — I think, yeah, it will be least privilege and setting access control.
B: On namespaces, across the cluster-api controllers.
A: Because spoofing will potentially not let us detect it, even with audit logs: since the credential used is the same, there is a chance we won't really know whether it's spoofed or not, unless we're doing something really fancy like machine learning, trying to find out whether what the infra admin does is anomalous or not.
A: Probably not good — yeah, I see what you mean. I think this ties to what Robert was also saying: is there some way to do separation of duties, where we say there are two different roles for admins? One is the one who can control the management cluster and doesn't get access to the workload cluster when it's created, and the other is a workload admin who gets access only to workload clusters.
A: There are some personas, I think, defined here, if I'm not wrong — cloud admin, platform operator, and workload operator. I don't know if those match what...
B: ...you're talking about. Yeah, so things could be something along the lines of: you lock down the controller namespaces, so the infrastructure secrets are protected. Now that I'm thinking about it, I just thought of another attack vector for the workload clusters. This doesn't affect AWS so much, but everyone else — maybe not Azure as well, but certainly everyone else.
B: This is really the difference between running your own cluster and a managed cloud provider like GKE or EKS: we're running CPI and CSI in the workload cluster, right? They also have credentials, and in most cases those credentials are copied across from the management cluster into the workload cluster anyway — to which we don't have a good solution. We do within VMware products, but not for upstream, not yet. That might actually require changes to CPI itself long term.
A: I just want to know about that, so that we don't create a duplicate GitHub issue for something that is already being worked on. And if it's blank, I'm going to assume we haven't created any work for this mitigation, and we'll just end up creating a GitHub issue for it. That's the main idea behind the status.
A: Okay, so help here would also be appreciated: just add wherever you can, and ideally, if you have a GitHub issue which says "this is implemented in this issue", that is even better.
A: Yeah, this is where I was thinking we have to be careful of boiling the ocean with the people we have. If we just scope to what we have — we know from the data flow what APIs and cloud services we are using, like EC2, ALB, Secrets Manager, and maybe a couple of others — we can just focus on that. There is a good chance that any other cloud provider service would have a similar threat, but in favor of specificity we can just talk about those here.
B: In fact, even if you don't, you can always test it, because in our Testgrid logs we have all the CloudTrail data for everyone.
B: That's in our — if you just look at our Testgrid, all the PR tests and end-to-end tests, yeah.
A: Cool, okay. I'll update this later; for now, it looks like the mitigation is "server-side identity is confirmed by HTTPS", and there were some discussions about legitimate MITM needs for TLS for some governments. Do we want to go deeper on this?
A: Yeah, I agree. I feel like this is one thing where we can let the downstream products of cluster-api take care of the requirement, and as the upstream project we say: we don't recommend this, and we will assume the verification happens over HTTPS for server-side services. Does that sound reasonable? Yeah.
B: I mean, for the AWS one, for example, we can give additional guidance to users if they want it, like locking down the fixed endpoints — that's all stuff they can do. I don't know if we expose the outbound TLS options; maybe that's something we can follow up on and sort out. But you can even lock it down from the IAM perspective, so you can say the user is only allowed to connect to these regions.
B: I think once they're in GovCloud they can't really do much anyway — you can lock it down so it has to be FIPS; otherwise it's not even going to get authenticated and it won't do anything. So there's a whole bunch of stuff. I mean, this is all going to be customer — yeah, end-user — guidance, yeah.
A: Cool, okay. The rest of it is the same; the next one is tampering. So — oh, did I miss something? No, okay: spoofing of worker node and control plane. Yes, I missed this. I think you took an action item, Naadir, to rewrite a couple of them — fourth and fifth, yeah?
B: I need to rewrite that one, yeah. It's doable, just not in the way it's described there. Yeah.
A: Exactly. Okay, sounds good — I'll let you take care of that. Going back to tampering: here the idea was, what if — and this is more from the supply chain perspective — the component binaries and configuration files were tampered with? From the supply chain perspective that would be during build time and install time, and then runtime would be when it's already deployed and somebody is actively trying to tamper with an existing environment.
A: For build time, the hope was to do something similar to what Kubernetes is doing: create an SBOM and then have some level of check — some verification, like we have for Docker images, as a post-build action — just to make sure that we got everything that we think we have.
B: What is the current state of the art of, say, binary verification? When clusterctl is downloading something from a GitHub release, beyond just verifying the TLS connection to GitHub, what else should we be doing?
A: Post-download, my hope would be something similar to Docker images: when a Docker image is downloaded, my understanding is that it also looks for a checksum file from the server, and once the image is downloaded it verifies whether the image has the same checksum as the one hosted by the service or the registry.
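The post-download check described above can be sketched in a few lines of Go; this is an illustration of the general technique (fetch a published SHA-256 digest alongside the artifact and compare), not clusterctl's actual implementation.

```go
// Minimal sketch of artifact checksum verification: hash the downloaded
// bytes and compare against the hex digest published next to the release
// (e.g. in a *.sha256 file). Illustrative only.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifySHA256 reports whether data hashes to the expected hex-encoded
// SHA-256 digest.
func verifySHA256(data []byte, wantHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == wantHex
}

func main() {
	artifact := []byte("hello") // stands in for the downloaded binary
	// Well-known SHA-256 of the string "hello":
	published := "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
	fmt.Println(verifySHA256(artifact, published)) // true
}
```

Note this only protects integrity, not authenticity: if the attacker controls the server hosting both the binary and the checksum file, a signature scheme (the GPG and Sigstore discussion below) is what adds the missing trust anchor.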
B: I mean, I'm familiar with how the HashiCorp plugins work, with the GPG signature thing, but my knowledge of all that is a couple of years old now, so I don't know what the current state of that whole landscape is today. Let me...
F: You want to look at the Sigstore project, yeah. cosign has been working with Kubernetes already — cosign is kind of already integrated into Kubernetes to an extent, because they've been looking at SLSA compliance, and they've been talking to cosign and Sigstore and Notary — but cosign is probably in a more usable state at the moment; Notary is still alpha.
F: And in-toto is about the supply chain piece. I've been meaning to do more with in-toto, because I know you can use cosign for signing, and then you can put it onto a kind of trusted list of signatures to get keyless signing, which is all very cool. in-toto is more about the build process — CI/CD — going through each stage of the CI/CD pipeline and attesting things through.
A: Right. I feel like we would probably get a very similar kind of design if we look at what the release engineering team is doing for kubectl binary downloads in terms of signing and integrity checks, so I'll check that — potentially they're using cosign for binaries as well as images.
A: Yeah, that'd be great, thanks. Okay, cool. This one seems more like end-user guidance again: hey, if you see some component getting restarted, check whether you expected that to happen or not. One thing I'm skipping here — something that people do when everything else is very mature from a security perspective — is memory corruption for anything that is running. I have completely skipped that here, but just for the sake of completeness I wanted to share it.
B: It's just a fun tip: there's just a bunch of templates, and the templates by default don't include any AMI reference. Cluster API AWS looks up images in a well-known account that VMware owns at the moment — we want to transfer it to CNCF at some point. That's right — they own all the images, and they're just there. So I have put a to-do there, because you can override that AMI, which is an attack vector, performable by — because typically the platform operator...
B: So, going back to earlier — who can create a workload cluster — the end goal is to enable some sort of platform-as-a-service.
B: They can then choose an arbitrary image. I know some users put Gatekeeper on top and lock down that field, so you can only select AMIs that are allowed from a certain list, but that's an end-user thing. There's probably something where we can give better access control restrictions directly in the API — that might be something we want to look at.
A: Yeah, I think that would be great. So it looks like there is a workaround that can be done on the end-user side through some level of policy management with OPA or Gatekeeper, but there is something we can do built-in which will allow something similar. Okay, cool, that sounds like a good one. Did you already add a to-do for yourself?
A: It's on number five — okay, that works. This one, right? Okay, yeah, that's good. Should I move it to seven? It looks like that is closely related.
A: I am not sure about the current state of cloud API auditing — how does that work?
B: That's an end-user action. I mean, really, you should be using, in the AWS case, an IAM role, or in the VMware case a specific service account, which has only the privileges that are needed — not just recycling administrator@vsphere.local, yeah.
A: Okay, cool. So maybe I missed this comment — SSH is off by default, yeah, okay. I think this seems reasonable; probably an end-user guidance thing, looks like.
A: All right, okay. I'm assuming no comments means we'll review it later or everything looks good. So let's move to this one in the interest of time: information disclosure — exposure of kubeconfig credentials. I think the main thing here was: is there a way for us to make this short-lived, or can it be deleted after use from Secrets Manager? I think this part is done for AWS, if I'm not wrong.
A: And of course there is the other threat that you mentioned, where we don't use Secrets Manager — so adding that threat might be a good one to add somewhere here, whenever you get a chance.
B: Yeah — and, I think, the kubeadm control plane key materials too.
B: So I think what that is, is a bootstrap token — that is what we mean by kubeconfig credentials.
A: Okay, sounds good. So this is an interesting one. I don't think Lubomir is around; I'll probably catch up with him later and see.
B: Oh — oh no, hold on, I see what you... no, okay, all right. I think we need the deep dive.
B: Exfiltration of the admin kubeconfig, or...
B: Yeah, so the description is: using one pod on the control plane, right? Let's see — "kubectl get" everything.
B: It's "exposure of kubeadm token credentials": exposure of the kubeadm token, located on the node or in the cloud, can lead to partial control — it can be reused to register as a control plane node itself, after which...
B: I mean, we just need to say short-lived or deleted — it doesn't matter where it is. Yeah, fair point. We need to look at Pod Security, because we didn't do anything for ages — the old Pod Security Policies were deprecated forever, so it was like: okay, don't do anything until that's sorted. But now that we do have a suitable replacement, we need to revisit it, yeah.
A: My real hope, and something that gets me excited, is creating some level of template or ClusterClass where we exempt all the control plane namespaces but apply the baseline policy for everything else, with the newer Pod Security admission, by default for cluster-api. That would be really nice, and it's something I would be happy to implement.
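With Pod Security admission (the PSP replacement mentioned above), the policy is applied per namespace via labels; a sketch of what such a default might look like, with the namespace name purely illustrative — control-plane and provider namespaces would simply not carry the enforce label:

```yaml
# Illustrative sketch: enforce the "baseline" Pod Security standard on a
# workload namespace via Pod Security admission labels. The namespace name
# is hypothetical; exempted namespaces would omit the enforce label.
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted  # warn when pods fall short of "restricted"
```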
A: This is again Pod Security, yeah, makes sense. "CA rotation is involved and we don't have tools that perform the automation" — I think this is a long-standing topic, if I'm not wrong, for Kubernetes CA management in general: we want some level of automatic rotation, but it's really never been there.
A: All right, cool, so keep going here: "not sure how feasible this is, but good idea". This was something that seemed really obvious to me — and maybe we have thought about it — so I just wanted to get thoughts on it. There are very similar ones, 13 and 14. The idea is that we can create an unlimited number of resources with the credential: an unlimited number of clusters, or an unlimited number of CIDRs, and I think there was one more, if I'm not wrong. Okay, pretty much those three. So the idea was...
B: Yeah — funny enough, I had floated this idea before, not even from a security perspective, just from a sense that you probably want to place limits on your platform operators and what they can do. But no one thought it was important at the time. So I...
A: I think it's very simple — okay, good, all right. Now I think we've got a very good reason to do it, because this is a finding. So, cool. Plus one, it looks like. And then "misused to exhaust etcd" — and an etcd link to an issue was shared.
A: I think there was some expectation in that link that we should apply some level of pod-level quotas or limits, so that led me to wondering: for all cluster-api pods, static or not, do we have some basic resource requests and limits applied, or do we just let them run?
B: We just let them run. We don't have great data on what is — well, at least we've not been able to get good data from consumers on what is reasonable — but yeah, we should do that.
A: Right, right, makes sense. Okay, yeah. The next one was deleting the elastic load balancer — that is basically the glue between the management cluster and the workload cluster API server, since the connection happens... I mean, people...
B: Oh, the cluster is dead once you delete the elastic LB — and people have done that for real, well...
B: That's it. Some users do a workaround where they've done a whole bunch of redirection in Route 53 using external-dns, but the whole thing's a mess, right? The Network Load Balancer — the newer type of load balancer for AWS — would technically solve it, but it's got other restrictions, mainly that it doesn't properly support hairpinning, which we need. But there are workarounds.
B: We just haven't done the work, and there's been a long-standing idea about creating a separate load balancer provider, which would then give the user options like: we want to use PowerDNS and have a friendly alias for our stuff. So there's probably a load of work that can be done. The heartbeats are there, to be fair, but number two is...
A: ...gone, right, okay, cool. I would love a link to this — wherever the heartbeat implementation or something exists — just to read more and understand it. So, whoever wants to take that...
A: I see what you mean — okay, yeah, that makes sense.
A: Okay, got it, all right. I'll just make a note of that: long-term, a load balancer provider. We are almost running out of time, so let me quickly check if we have something else. This can be quite the problem even unintentionally: exhausting the EC2 and ELB APIs by just doing create, delete, update — create, delete, update — forever. So it felt like a rate limit, or a second pair of eyes, or alerting would help.
B: Sure — I mean, we'll get rate limited at both ends: AWS will throttle, and the controller will start rate limiting, right? But we rate limit purely so as not to get fully rate limited by AWS; they're actually much more aggressive. Azure isn't, for instance — Azure was quite happy accepting hundreds of requests, and so was vCenter.
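The client-side rate limiting being described can be sketched with a simple token bucket. This is an illustration of the mechanism only — the real controllers use the rate limiters that ship with client-go / controller-runtime, not hand-rolled code like this.

```go
// Minimal token-bucket sketch of client-side rate limiting for outbound
// cloud API calls. Illustrative only, not the controllers' actual limiter.
package main

import (
	"fmt"
	"time"
)

// bucket refills `rate` tokens per second, up to `burst` tokens.
type bucket struct {
	tokens float64
	burst  float64
	rate   float64
	last   time.Time
}

func newBucket(rate, burst float64) *bucket {
	return &bucket{tokens: burst, burst: burst, rate: rate, last: time.Now()}
}

// allow consumes one token if available. A caller that gets false should
// back off instead of hammering the cloud API and tripping its throttling.
func (b *bucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	lim := newBucket(10, 3) // 10 calls/s sustained, burst of 3
	allowed := 0
	for i := 0; i < 100; i++ { // 100 back-to-back "API calls"
		if lim.allow() {
			allowed++
		}
	}
	// Only the burst (plus whatever refilled during the loop) gets through.
	fmt.Println("allowed:", allowed)
}
```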
B: You can DoS it very quickly — I think that's fixed now, but you could, just from one Cluster API vSphere setup, have it start creating 10,000 sessions against vCenter, and that's it, just from creating one cluster. So yeah.
B: Yeah, yes — there's a reconciliation limit as well, which by default is 10: none of the controllers are going to reconcile more than 10 clusters at once. Oh really? Okay, that's good to know. But there probably is something — I just need to think about it. There's probably something around this, but we're probably going to start running into limits of what information the Kubernetes API can tell us about who performed a particular action, and things like that.
A: Yeah, okay, that seems reasonable — so no real clear mitigation in terms of code we can write right now, in terms of what already exists versus what we can do. Okay, next one — we're a couple of minutes from time. I think this is Pod Security related, so we've discussed it.
A: That would be great. Okay, great, all right — I think we wrapped up pretty much all the threats. Yeah, okay, so this one I'll take care of later. Thank you again for talking through this; I think we got through most of the docs that we wanted to implement or talk about.
A: My hope is we'll get to a standardized set of content in a week or so, and before we meet again I'm thinking of converting this to markdown. By then, hopefully, we'll have the diagrams and the missing threats — or the ones we are going to rewrite — and then we'll probably wrap up in a meeting after that, so early February or so. Does that sound reasonable? If folks want to meet next week instead of waiting a couple of weeks, let me know.
D: Okay, sounds good. Yes — Chris already got back to us and says he totally supports the fuzzing, so check your email thread. We'll see if he is available for next Wednesday's call, or if we can set up something else. Okay.
A: Sounds good to me — thanks, Robert. Okay, if not, thanks all; talk to you in a couple of weeks, and we'll keep chatting on Slack until then. Yeah, sure, all right, bye.