From YouTube: Kubernetes SIG API Machinery 20230809
Description
- [mo] APIService support for URL?
  What would be the correct way to limit abuse in terms of network connections being made from the KAS to arbitrary URLs?
  Could this be implemented via an ExternalName Service? Should it be?
  It seems this may already be possible when --enable-aggregator-routing is disabled (though it is unclear to me which hostname the serving cert is checked against).
- [geetasg] Consider a separate etcd cluster for CRDs: https://github.com/kubernetes/kubernetes/issues/118858
A: Great, and now it's recording. Good morning, good evening, good afternoon, depending on where you are. Welcome to the SIG API Machinery bi-weekly meeting for Kubernetes open source. My name is Federico Bongiovanni; I'm here with my colleagues and co-chairs. Today is August 9, 2023. We are going to go through a couple of topics today, and the first one is from mo, so I will let you start with it, mo.
B: ...servers, and it all works. I was reading through the service-resolving code and all that logic there, and I couldn't exactly tell how far you could stretch it. The gist of my thought process for a use case was just: I want to be able to have an aggregated API service, but not host any of it on the cluster; basically, just point it at a remote endpoint somewhere.
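For reference, today's APIService API only takes an in-cluster Service reference, not a URL; a minimal sketch of a registration (names illustrative):

```yaml
# Minimal APIService registration; the aggregator proxies requests for this
# group/version to the referenced in-cluster Service (names illustrative).
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.example.io
spec:
  group: example.io
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: example-api
    namespace: example-system
    port: 443
  # caBundle: <base64-encoded CA used to verify the backend's serving cert>
```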
B: Obviously, it then brings up all the concerns about the API server accessing network endpoints based on config that lives in the API, which might not be under the control of the right actor. Yeah, I think, David, you and I talked about this a long time ago, but I don't actually remember.
D: It has been a very long time. The very first implementation of aggregation actually did allow arbitrary URLs; at the time, Daniel expressed concerns. I wish he were here (this happens many times, but I wish he were here) so we could ask him; this time I'm not going to bother him in his rest.
B: Oh, so like create a Service manually, set its IPs to whatever you want, and then just abuse that, basically.
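The workaround being described, a selector-less Service with hand-managed endpoints pointing off-cluster, would look roughly like this (names and IP are made up):

```yaml
# A Service with no selector, so the endpoints controller won't manage its
# endpoints; the Endpoints object below is maintained by hand and can point
# at an IP outside the cluster (all names/IPs illustrative).
apiVersion: v1
kind: Service
metadata:
  name: remote-backend
  namespace: example-system
spec:
  ports:
  - port: 443
    targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: remote-backend   # must match the Service name
  namespace: example-system
subsets:
- addresses:
  - ip: 203.0.113.10     # an address outside the cluster
  ports:
  - port: 443
```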
D: I'm not asking about abuse per se. I am wondering if that is already a path to do what you want.
B: Maybe; I didn't try it too specifically. I was looking at the code, and the way we do service resolution is dramatically different based on whether you have the aggregator-routing flag set or not, which is a little awkward if you want... I mean, I guess I don't care whether this works on someone else's cluster, only mine.
D: I think a good starting point is going to be figuring out what is actually available today, rather than necessarily what people intended, right: what can actually be done. I think as a starting point for discussion, that's going to be important for people to know. I don't have a fundamental concern with it myself. There are... and why not?
D: So it is different than an admission webhook. The difference is that it is a proxied result, not a separate call, and the kube-apiserver says "act as this user." So if you own a cluster and you can point it at another cluster, you are able to do things like: I am system:masters here; go run this call somewhere else with a certain identity of system:masters. When you use admission webhooks, you can't do that.
D: Yeah, it's going to be worthwhile understanding what we have in existence already. I don't think I'm fundamentally opposed to allowing someone to set a URL, but it might be... I have to think about it.
D: It was an initial prototype. My first prototype was based on a string URL, not on a service. Truth be told, like I said, I don't actually know whether you can wire this up and manage the Service or Endpoints manually; I think you'd definitely race. But you have limitations then on, say, the server name that's going to be expected, and so you could still fail negotiation: if the thing you're pointing at isn't actually that service, you would still fail TLS negotiation.
D: If I had a URL to Jordan's cluster, and I am system:masters in my cluster, and I pointed it at Jordan's cluster, and if someone does not have distinct trust domains for the different clusters (which, I agree, would probably be a configuration error), then suddenly I'm able to act as a particular API service, a particular group, against Jordan's cluster. And if I do something devious, like create an APIService for CRDs...
D: ...that is important in Jordan's cluster, I could potentially do clever things, nefarious things, with my powers.
D: I would expect us to require the server name to match that service, right? So if, in the example I gave, Jordan's cluster (sorry, I was looking at your picture) does not respond with a certificate valid for service whatever, I would expect the TLS negotiation to fail, so I would never submit the request.
B: Yeah, so I don't think you can rely on server-name resolution to protect you from a malicious actor that has very high privilege on a particular cluster and is trying to use it to then attack a different cluster. I just don't think you can do that, because the actor would just keep tweaking the config.
B: Okay, yeah, I can poke at that; that'll be kind of a fun experiment. But yeah, I guess, just stepping back for a second, regardless of what's possible today...
E: I think even more than that: with aggregation, we're trusting the kube-apiserver as a front proxy. We would expect aggregated servers to delegate authorization calls back to the central server, and so you would also need network visibility from this off-cluster aggregated server back to the kube-apiserver to make those authorization calls.
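The delegated authorization calls mentioned here are SubjectAccessReview requests that an aggregated server sends back to the kube-apiserver; roughly (values illustrative):

```yaml
# What an aggregated API server POSTs back to the kube-apiserver to ask
# "is this user allowed to perform this action?" (all values illustrative).
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com
  groups: ["developers"]
  resourceAttributes:
    group: example.io
    resource: widgets
    verb: list
    namespace: default
```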
B: Are you talking about kube-bind? I think yes. So, I will admit I stared at the kube-bind repo trying to figure out how it worked, but there were so many very fancy-looking diagrams. As far as I could tell, though, it was purely based off of CRDs and syncing priorities across different levels of clusters.
F: Hi, yes, I'm here. This is the first time I'm attending this meeting, so hello to everyone. I have been part of the etcd community for some time, so I bring an etcd-related question here today. The proposal is: can we have a separate etcd cluster for CRDs, just like we allow for events? And the motivation to propose...
F: ...this is similar to the motivation for moving events out to another cluster, which is mainly to protect the performance of the main etcd cluster. I got comments from Dr. Shamanski suggesting that this is something that was discussed earlier, but it's not clear whether it is architecturally aligned, so I'm willing to investigate this further if it is architecturally aligned with the future direction.
G: I vaguely recall that we had a discussion about the way that you would configure something like this. There's a kube-apiserver command-line flag that you can use to separate out which etcd is used for storage for some resources, but it's not enabled for CRDs. I think one of the questions was: would we want to continue to extend that flag and make it even more complicated, or do we want to switch to something like component config as a way of enabling these types of things?
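The flag being referred to is --etcd-servers-overrides, whose value is of the form group/resource#servers; a sketch of how it is wired up today for events (paths and endpoints illustrative):

```yaml
# Fragment of a kube-apiserver static-pod spec (endpoints illustrative):
# events are stored in a dedicated etcd via the existing
# --etcd-servers-overrides flag, whose value is <group/resource>#<servers>.
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://etcd-main:2379
    - --etcd-servers-overrides=/events#https://etcd-events:2379
```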
D: I don't remember it with component config. I do remember there were questions about using a flag where you had to predict the names of the CRDs that you were interested in. It's not as simple as "I want all custom resources to be stored separately," because even if your goal is "I want a separate etcd for all my important things for servicing the control plane"...
D: ...right, you know, then you have to know which ones those are.
G: Now, Daniel had been pointing out that, since CRDs are dynamic, using a flag that is statically configured at the startup of an API server seems not to match very well with the lifecycle of CRDs. Like, you introduce the CRD and now you have to restart the API server to say where it's stored; that's kind of a chicken-and-egg problem. Do you restart the API server first, set up the flag, and then create the CRD?
F: So, we have seen... I work for EKS, and we are a managed Kubernetes service, and we often see things like Argo Events or similar objects in the millions, which get listed a lot, and they affect the performance of the cluster's etcd. So if there was a mechanism to separate those out (and "those" is not all CRDs; it should be configurable), it would give us a way to provide better performance.
D: So you're talking about two different categories, but it's not clear to me how you establish which category is which. Yeah, so I see Jordan has a hand. Sorry.
B: Yeah, so a few things. Let's see... very recently I refactored all the horrible code that implements this layer of routing for storage, so it's a lot cleaner now, and technically implementing this would be much easier now. The thing I was going to ask, though, to Joe's point: encryption at rest lets you configure which custom resources you want encrypted, and whether they exist or not, or come and go, isn't really relevant to it.
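The encryption-at-rest comparison refers to the EncryptionConfiguration file, which names resources statically whether or not they exist yet; a sketch (the custom-resource entry assumes a recent Kubernetes that accepts group-qualified resources here; names and key material are placeholders):

```yaml
# EncryptionConfiguration fragment: resources are named up front, including
# custom resources whose CRDs may not exist yet (names/keys illustrative).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  - widgets.example.io   # a custom resource, listed before its CRD exists
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>
  - identity: {}
```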
B: Maybe that's fine, I guess, though I was going to ask: would it be so bad to just let you configure all CRDs in a different place? At least they would... well.
D: That probably wouldn't support the goal, right? So imagine you have an etcd; you wanted to put Argo on it because Argo crushed your etcd server, but you also have a CRD that represents your CSI driver, or your CNI plugin of some kind. You're not money ahead by putting them together if you are then unable to establish how your storage or network should work.
B: Yeah, I was curious where the boundary of a fault domain was getting established, right? I could see three boundaries, because those exist in the API server today: core resources, aggregated resources, and CRDs. So you could split it out that way, if that was where your fault domains lay.
F: So, ideally, we would like it to be configurable, but if there is a dramatic difference between the effort required for separating out all CRDs versus a specific one, all CRDs might be something to think about, for me as the requester for this. But in general: is this a good idea? Should I start digging into it further?
D: It's a non-trivial idea, right? I agree that there are issues around prediction of what you would want to place in one location versus another. There are problems with who is actually making that decision, right? In the example I think you have, I believe you would like Amazon to be able to make the decision about which resources go in which location, but a user creates the CRD. I think there are potential issues around crosstalk (what if I put mine in a spot that you don't like?), and there are issues around...
F: Yeah, yeah, that totally makes sense. Even I'm not clear who makes the decision; I would leave it up to the customer. We could just recommend "this is what's happening to your cluster; split it out," and they could make the config change, or as an operator we could do it. So who makes the change is unclear to me. I'm also just looking for a mechanism to make the change, establish authorization over who can do it, and then go from there, perhaps. So I agree.
F: This does seem non-trivial to me, of course, but it's something I'm interested in digging into if it's not already in progress somewhere else; otherwise, I would join that effort.
G: I just posted in chat (it was actually back in the history of the agenda notes) one of the issues that had been opened about this before; actually, a PR had been opened about it before and then closed. I'm kind of with David: I would be really interested to see somebody dig in and try to tease apart all the problems that exist here. You know: how is the flag defined and configured? Is this for all CRDs? Can you do it per CRD?
F: All right, sounds good. I will review this issue, the 82580 issue. I see the "component config" term; I'm not familiar with it, but was that the last design thought that was being pursued?
E: That was talking about how we give configuration to the API server, like whether we do it in flags or in a more structured way, and about trying to add complexity around the etcd-servers-overrides flag. It was already kind of a twisty command-line flag, and this was trying to add more things you could express in there; we kind of didn't want to explode the complexity of that flag. That's pretty orthogonal to whether or not to allow changing etcd locations for custom resources.
E: One question I have: you mentioned that, ideally, the customer would get to choose where these things go. That seems really strange to me, actually, for a managed offering. Presumably you don't expose etcd-level options to the customer today; they just, you know, run Kubernetes, and you decide whether to put events in one etcd or not. It seems pretty strange to give them visibility and control over storage-level configuration, so I wouldn't think we would put etcd-level or storage-level options up into the API.
F
Yeah
you're
you're
right
I
should
clarify
what
I
meant.
Is
that
I'm
not
sure
who
will
make
the
decision
if
it
turns
out
that
it's
unacceptable
for
the
operator
to
make
these
calls
for
some
reason,
then
we
could
explore
getting
approval
from
customer
or
authorization
of
some
sort,
but
as
as
of
now,
it's
TBD-
and
you
are
right
today
at
city
is
internal
concern
like
they.
Don't
they
don't
know
or
have
any
control
over
hcd
configuration.
It's.
E: It's worth noting that the only use of this etcd override that I'm aware of is to relocate where Kubernetes events get persisted, and those expire after, like, an hour or a day. So I actually don't know of anyone using this option to relocate data that persists, that we care about. I think that should probably give pause: a lot of the complicated issues, like what happens if you change it after the fact, or what...
E: ...if, you know, you can't move from one to another; those just sort of hand-wave and go away with events, because they were going to go away after an hour or after 24 hours anyway, so we kind of just don't worry about them. That's why people use it to relocate events. Doing it generally, for data that you may care a lot about, is just a lot scarier.
F: But let me reflect on this feedback as I start digging into it. So...
F: Our observation is the object count: sometimes it gets into the millions and affects the performance.
G: That one's also interesting. I have seen cases where, no matter what you do, the use case will keep creating more objects until you find a scalability limit, right? So if you give them 10 million, if 10 million was the limit before and you change things to make it better, they'll hit the 10 million limit, or whatever the next level is. So in some cases it might be better to find a way to restrict what the total reasonable number of objects for something can be.
A: Right, I was going to say two things. Believe it or not, I cannot find how to raise a hand, maybe because I'm the host of the meeting, so I'll note that for posterity. So, one (and this is, I know, not the question you asked), but, you know, when you said "Argo"...
A: ...it brought to mind a lot of cases that I have seen in production where Argo was the problem, and sometimes we found that there was not, like, a human person creating these objects, of course, especially because it's a CI/CD tool, mostly, the Argo family. Like, it's an automated process, you know, misconfigured, going crazy; and the solution was not to give more storage or more performance, it was to solve the root cause.
A: So maybe that helps you, sorry. And the second is a question for the group, because Joe was talking about limiting the number of objects, and I know we have resource quota for the native types, but we don't have it, as far as I know, for CRDs.
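For context, object-count quota can in fact be scoped to custom resources with the count/&lt;resource&gt;.&lt;group&gt; syntax, which may already cover part of this; a sketch (names illustrative):

```yaml
# ResourceQuota limiting how many objects of a custom resource may exist in a
# namespace, using object-count quota syntax (names illustrative).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: widget-count
  namespace: team-a
spec:
  hard:
    count/widgets.example.io: "500"
```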
A: So that could be another way of guardrailing your clusters: either that, or looking into it. But if there is a valid, genuine case why Argo has to create millions of objects and you restrict it, you may run into side effects there too, right?
F: Right. In most of these cases we kind of don't know; we have limited context on the workload, so pushing back is a tricky decision, because we don't have the information or insight about whether it is doing real work or not. Sounds good, thank you. Thanks a lot for all the feedback. I will review the old issue, mainly, and dig a little bit more, and probably we'll stop by again for one more round of discussion.
A: Well, I guess that's all for today! Thank you, everybody, for joining. We'll upload the recording later; I posted the playlist where all the meetings go, and I will hopefully see many of you again in two weeks. Have a great Wednesday and rest of your week. Thank you for joining.