Description
Discussed making the API Server Network Proxy able to transmit traffic from the cluster to the control plane. Also an update on Credential Provider extraction.
B
Thank you, I was about to, but if you're doing it, great. So this is the July 16, 2020 Cloud Provider Extraction Working Group meeting. This is part of SIG Cloud Provider, which is part of the CNCF. We have a code of conduct, so please be respectful, inclusive, and considerate of all of your fellow contributors.
B
Having said that, I'm going to go ahead and kick off the meeting. I know we have a few folks from the API Server Network Proxy project.
B
This originally started as the SSH tunnel replacement, and there was some interest in extending the API Server Network Proxy to be able to handle traffic going from the cluster to the control plane. So with that introduction, would either Sri or Jacopo care to jump in and let us know what you're thinking?
B
C
Hi all, Jacopo here. So actually, as Walter said, I quite recently discovered this API server proxy and I tried to use it in a project.
C
I'm working on a project where, basically, we have the control planes of some Kubernetes clusters running in a dedicated Kubernetes cluster, while the cluster network, that is, the worker nodes, runs in a distinct network.
C
Currently we are using OpenVPN to do basically the job of the API server proxy, but it would be interesting for us to replace it and use the API server proxy instead. That would solve only part of our issue, though, because we would also be interested in using the API server proxy for the cluster-to-control-plane communication, and I saw there was already an issue open and some discussion around this.
C
There are probably some complications in putting this in place, but I took a look at the code base, and I think there is not much difficulty in using the tunnel in both directions.
C
Probably the tricky part is the bootstrap, because the components that are running on the cluster network need to communicate with the API server during the bootstrap phase. So probably, instead of using a DaemonSet...
C
Basically, this is why I'm here: I would like to get your opinions about this topic, and I'm also available to contribute to this if there is interest in the topic.
B
So while people are considering that, and please feel free to chime in with any thoughts, opinions, etc., a few things I think are worth mentioning. Justin Santa Barbara, specifically when he was working on the AWS side, I know was considering doing something very similar, so I would definitely make sure that you loop in Justin SB. In fact, I invited him to show up to this meeting, warning him this subject would be here, but I don't see him on the call. That's one quick note.
B
You already mentioned the interesting part of what we have today. We generally start the agents up as DaemonSets, but we've got at least two other options: the one you mentioned, which is to make it a static pod, and you could also have it as a service on that VM, etc.
B
I know a lot of folks do the service solution for things like NPD alongside the kubelet, so I'll just mention that that's possible. When we're routing traffic in the other direction, and I'm not sure what solution you have in mind, I'll mention that the kube-apiserver is the only thing that currently talks from the control plane down to the cluster, and it has some code specifically to direct traffic through the tunnel.
B
I don't know what it is you have planned going from the cluster up to the control plane, but obviously you need to solve the problem of getting the traffic that you want to send to the control plane into the tunnel to begin with. I am not a networking expert, but I'll just mention that.
B
I'm fairly sure that we need to work out how you plan on doing that. The tunnel segment from the client to the server, which is to say from the kube-apiserver to the connectivity server, has currently been set up to handle multiple different protocols: it'll do HTTP CONNECT over mTLS, gRPC over mTLS, or gRPC over UDS.
B
The only thing we've actually built into the segment going between the agent and the connectivity server is gRPC over HTTPS, although you do have, I think, some options on how you want to do authentication: I believe that segment can be authenticated either via mTLS or via an authorization request to the kube-apiserver.
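To make that concrete, here is a minimal Go sketch of the mTLS option for the agent-to-server segment, assuming hypothetical certificate paths and a hypothetical proxy-server address; the real agent wires this up through its own flags and client library rather than a bare gRPC dial.

```go
// Minimal sketch: an agent-side mTLS gRPC connection to a connectivity
// (proxy) server. Paths, the server address, and dialing it directly are
// illustrative assumptions, not the project's actual wiring.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	cert, err := tls.LoadX509KeyPair("/etc/konnectivity/agent.crt", "/etc/konnectivity/agent.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := ioutil.ReadFile("/etc/konnectivity/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	creds := credentials.NewTLS(&tls.Config{
		Certificates: []tls.Certificate{cert}, // agent identity (mTLS)
		RootCAs:      caPool,                  // trust anchor for the proxy server
	})

	conn, err := grpc.Dial("proxy-server.example.com:8091", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// The real agent would now open the bidirectional gRPC stream that
	// carries tunnel packets over this authenticated connection.
	log.Printf("connected: %v", conn.GetState())
}
```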
B
So I think, you know, if you wanted to do that via something like OpenVPN or something else, there's clearly some work there that would need to be done. We had someone, and I don't remember who, asking about this specifically a couple of days ago on the API Server Network Proxy, and I will say in this meeting the same thing I put in the issue, which is that I am wholly in support of that work.
B
We just need someone motivated to actually sit down and do it, but any one of the leads of the project, I think, would be happy to provide some advice and some review cycles to make that happen.
D
Yeah, cool. So, Walter, I just wanted to add a few things to that. Just to give a quick introduction: I'm Balaji, and I'm working as a senior developer on Amazon EKS. In EKS, what we do is we have two sets of networks: we run all our master components on one network, and the customers' workloads run on the other network, and that's the way we set things up.
D
So that's what we were trying to do, and having the connectivity from the cluster to the control plane through this API server proxy will help us there as well, so that we don't have to set up the network for every master instance that comes up.
B
Yeah, that sounds familiar and eminently sensible. A couple of quick thoughts, and I don't want to tell you how to do your thing, but one quick thought is that you may want to run two copies of the connectivity server in the sort of setup you're talking about, just because HA concerns are a real thing and you don't really want it to go down and suddenly lose all communication.
B
Okay, so if you wanted to, it's pretty easy to set up two. Okay, awesome. But yeah, that makes perfect sense. Sorry, I think we do it slightly differently, but we do something very similar, so yeah, it makes sense to me. Okay.
B
So personally, I would start with the design proposal, right? And you may want to have two copies if you need some of it to be internal and then sort of an external one. But I think, as far as some of the deployment stuff goes, that's easy enough to solve, so deploying the agent other than as a DaemonSet, I don't think that's something we need to worry about, right? It's really about tunneling the traffic. So you need to understand how you want to authorize.
B
If you want to do the nice, simple thing, I would basically just have a way to configure on the connectivity server some restriction that is equivalent to "only send traffic to the kube-apiserver." Maybe even only send traffic to the kube, yeah, I don't know exactly how to put it, but as I said, you probably don't want to let things in the cluster do things like talk to your etcd server or equivalent.
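As a rough illustration of the kind of restriction being suggested, here is a sketch (with made-up destination names) of an allowlist check the connectivity server could apply before forwarding a dial that originated on the cluster side:

```go
// Sketch of a destination allowlist for cluster-to-control-plane dials.
// The allowed set and the example targets are illustrative; a real server
// would derive this from configuration.
package main

import (
	"fmt"
	"net"
)

var allowedDestinations = map[string]bool{
	// e.g. a load balancer in front of the kube-apiservers
	"kube-apiserver.internal:443": true,
}

func forwardDial(target string) (net.Conn, error) {
	if !allowedDestinations[target] {
		return nil, fmt.Errorf("dial to %q rejected: not an allowed control-plane destination", target)
	}
	return net.Dial("tcp", target)
}

func main() {
	if _, err := forwardDial("etcd.internal:2379"); err != nil {
		fmt.Println(err) // rejected: etcd is not on the allowlist
	}
}
```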
B
If you have some etcd look-alike, I would be a little careful about what happens, especially since you've probably put up a firewall with a hole in it to get to the connectivity server.
B
If the connectivity server can then fan out to anything in the control plane, you're kind of opening yourself up to attack. So you need to, one, think about how we're going to control that, and then how we get traffic in and how we get traffic out. And if you want to use gRPC from the agent to the client, then we probably just need to extend the handshake code a little bit to allow connections to happen in the other direction.
B
In which case there's some code there that needs to be written. That's what comes to my mind. Chao, did you have anything else?
D
All right, so with respect to authentication, we do have something called the IAM authenticator that we run on the worker node, so that when you talk to the API server, the API server calls a webhook that we run for the authentication, and that returns whether this caller is authenticated and the set of groups that this particular user belongs to.
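For context, here is a hedged sketch of the shape of such a token-review style authentication webhook; the handler path, port, and validation logic are placeholders, and only the TokenReview request/response structure comes from the Kubernetes authentication API:

```go
// Hypothetical sketch of a TokenReview-style authentication webhook, similar
// in shape to what an IAM authenticator might expose. The token-validation
// logic is a stub.
package main

import (
	"encoding/json"
	"net/http"

	authv1 "k8s.io/api/authentication/v1"
)

func reviewToken(w http.ResponseWriter, r *http.Request) {
	var review authv1.TokenReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Placeholder for provider-specific verification (e.g. verifying a
	// signed IAM identity token and mapping it to a user and groups).
	user, groups, ok := validateToken(review.Spec.Token)

	review.Status = authv1.TokenReviewStatus{Authenticated: ok}
	if ok {
		review.Status.User = authv1.UserInfo{Username: user, Groups: groups}
	}
	json.NewEncoder(w).Encode(&review)
}

func validateToken(token string) (string, []string, bool) {
	return "", nil, false // stub: reject everything
}

func main() {
	http.HandleFunc("/authenticate", reviewToken)
	http.ListenAndServe(":21362", nil)
}
```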
D
So in this case, let's say we go ahead with this. If an agent receives a packet from any of the kubelets or pods and then sends it all the way to the API server, then for this authentication there has to be some more work that needs to be done, right? Or is that work already there?
B
There are multiple levels of authentication here, so I'm going to start at the bottom and work my way up. The first one is authenticating the tunnel: when the agent talks to the connectivity server, it has to be able to authenticate itself. That's one level. mTLS is a fine way of doing that; the other option that we have right now is to be able to get authentication materials from the kube-apiserver and use those.
B
After that, you then have the issue of when the customer, for lack of a better term, sends a packet to the agent: you have the option of forcing them to authenticate, which probably means the customer, whether it's the kubelet or whatever, has to have a special library, or you could set up a routing rule that just sends the traffic, with no authentication needed between the customer and the agent.
B
Let's go with that for now, and correct me if that's an invalid assumption, but I'm assuming you don't really want to write and run custom code in the kubelet. So the agent is just going to hand the traffic up to the connectivity server, and now the connectivity server has to do something with that. So I think you probably want to have some sort of rule about where traffic can be dropped on the network, possibly something that says:
B
"I have a list of known-good kube-apiservers," or "I have an address of a load balancer in front of the kube-apiservers, and the traffic has to go there." I don't know enough about your infrastructure to know exactly how to do that, but I'm assuming you're going to want some sort of restriction. It may even just be a port restriction, but something to ensure that the traffic only goes to the kube-apiserver.
B
Once the packet hits the kube-apiserver, I think you're okay, because the kubelet already has authentication materials for talking to the kube-apiserver. It's going to have some sort of service account and a service account token, or mTLS, or however you set it up, but Kubernetes should already have authentication for anything that's talking to the kube-apiserver. So I don't think you need to worry about that particular problem; I think that's already been solved for you. That was kind of a long-winded answer, but did that make sense?
D
C
Basically, I was using Envoy in order to tunnel the traffic using HTTP/2 CONNECT; it's a feature that has been recently introduced in Envoy. I used a tunneler that is running on each worker node, which is basically equivalent to the agent, and then there is another Envoy on the control plane network.
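For reference, the handshake being described looks roughly like the sketch below, which uses plain HTTP/1.1 CONNECT over TLS in Go rather than Envoy's HTTP/2 CONNECT; the addresses are placeholders:

```go
// Simplified sketch of tunneling a TCP connection through a proxy with an
// HTTP CONNECT handshake. The setup described above uses Envoy's HTTP/2
// CONNECT; this HTTP/1.1 version just illustrates the idea.
package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Connect to the front proxy (the Envoy on the control-plane network).
	conn, err := tls.Dial("tcp", "front-envoy.example.com:8132", &tls.Config{})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask the proxy to open a tunnel to the API server endpoint.
	fmt.Fprintf(conn, "CONNECT kube-apiserver.internal:443 HTTP/1.1\r\nHost: kube-apiserver.internal:443\r\n\r\n")

	resp, err := http.ReadResponse(bufio.NewReader(conn), &http.Request{Method: "CONNECT"})
	if err != nil || resp.StatusCode != http.StatusOK {
		log.Fatalf("CONNECT failed: %v %v", err, resp)
	}
	// From here on, conn is a raw byte pipe to the API server; the caller
	// would typically start the API server TLS handshake over it.
}
```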
C
That does the tunneling and sends the request to the API server. With this solution, Envoy also allows restricting the possible destinations, as Walter mentioned, to avoid having too much of an opening; I mean, you cannot send to just any destination. And in order to intercept the traffic and send it to the tunneler that is equivalent to the agent,
C
I tried several solutions. One is based on iptables.
C
I also tried using IPVS, similarly to what is done by kube-proxy, and another solution was basically creating a dummy interface, with the agent binding to this dummy interface.
C
I am associating a private IP with this dummy interface, and I'm using this IP as the advertised IP in the API server, meaning that the endpoint associated with the kubernetes service will use this IP. So the traffic will be sent directly to this interface and will be tunneled.
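A rough sketch of that dummy-interface trick in Go, using the vishvananda/netlink library; the interface name and the advertised private IP below are placeholders:

```go
// Sketch: create a dummy interface on the node and assign it the private IP
// that the API server advertises, so traffic to the kubernetes service
// endpoint lands on the local tunneling agent. Name and address are
// placeholders.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: "apiserver-tun0"}}
	if err := netlink.LinkAdd(link); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatal(err)
	}
	addr, err := netlink.ParseAddr("10.96.100.1/32") // the advertised API server IP
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		log.Fatal(err)
	}
	// The local agent then binds its listener to 10.96.100.1:443 and tunnels
	// whatever it receives toward the control-plane network.
}
```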
C
I worked on this proof of concept with Yusuf, who is also present, so he can add something if I missed anything, but this is basically how we solved it.
F
Basically, just to make this idea clearer: we chose this solution because it seemed to be the easiest, and there weren't any iptables rules to implement and all that kind of overhead, I would say. The idea is similar to node-local DNS; I mean, it's the same idea. You run on the host network.
F
You have a dummy interface, which you don't care whether it's up or down, and you send the traffic to this dummy interface and you receive all the traffic, as simple as that. We wanted to use basically the same link-local network as node-local DNS, but this is one of the restrictions on the default Kubernetes service: you can't use a link-local subnet, a localhost subnet, and that kind of thing. So we used one of the three private IP ranges, and currently it's working fine.
F
To be honest, we are managing to send the traffic from all the worker nodes to different control planes behind one load balancer, thanks to Envoy and the HTTP/2 CONNECT. Basically, as we tunnel the traffic we inject, manually, or well, automatically I would say, the name of the cluster, and then we load balance this to the right API server.
F
So that's why we basically wanted to have this full connectivity server working from both sides, because it sounds really promising and it would remove this whole OpenVPN stuff that we put in place. And it's integrated directly with Kubernetes, so even better.
B
Yeah, sounds great to me. I mean, yeah, please, we're happy to advise, we're happy to provide, you know, review cycles, whatever we can do to help make this happen.
C
B
I'm sorry, and maybe I misunderstood what Yusuf was saying, but even though I want one agent talking to one kube-apiserver, it almost sounded like there might have been an idea of routing all the agents through one connectivity server and then having them correctly land on the appropriate kube-apiserver. Maybe I misunderstood.
F
When I said the load balancing is done, well, basically on the load balancer inside AWS or whatever, I mean that the traffic from the agent to the connectivity server, sent through the HTTP/2 CONNECT, is sent to the right connectivity server. So even if we are behind just one public load balancer, we are able to pair the agent with the correct server. So basically...
C
We have multiple control planes on the same network, and we use a single load balancer to basically route all the traffic to the various API servers.
B
Okay, that's a lot easier. I was hoping that's what you were saying, but it almost sounded to me like you were saying you wanted the connectivity service to provide some of the load balancing, and I was like, okay, it's doable, but a lot harder. So that sounds a lot more reasonable. Okay, yeah, that sounds great. Chao, thoughts?
E
So, apart from the authorization things you have been talking about, I think we also need to duplicate all the traffic forwarding and listeners, right? Because currently all those things are written for one direction. We need to open listeners on the agent; we need to listen for the dials from the kubelet, right? And...
B
B
E
Yes, today the agent has a listener for the gRPC, right? That's all we have today.
B
Well, I'm saying, once you've established the connection, we have a stream listener, right? So once the stream is established, the existing code should work. Our problem is getting to the point of being able to set up the stream in the other direction.
E
Yes, and to set up the stream we also need to, because today we have our own protocol, right? We have the dial request and dial response. But if we don't change the kubelet, how can we achieve this? I think we still need to, because we have already modified the API server; we have embedded our client into it, the API server is importing our client, so it will send a dial request first, right?
B
It is then going to generate a dial request and send it in the other direction. So we definitely need a new listener; we need a new code path for doing the dialing, and then we send the dial up to the connectivity server, and the dial request goes to the kube-apiserver.
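A very rough sketch of the missing piece being discussed: an agent-side listener that turns each accepted connection into a dial request sent up the existing tunnel. The sendDialRequest and pipe helpers here are hypothetical stand-ins for the new code path, not existing project APIs:

```go
// Sketch of the reverse direction: the agent listens locally and, for each
// accepted connection, asks the connectivity server (over the already
// established gRPC stream) to dial the kube-apiserver.
package main

import (
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:6443")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			// Ask the server end of the tunnel to open a connection to the
			// API server and give us a stream bound to it.
			stream, err := sendDialRequest("kube-apiserver.internal:443")
			if err != nil {
				log.Printf("dial via tunnel failed: %v", err)
				return
			}
			pipe(c, stream) // copy bytes in both directions until either side closes
		}(conn)
	}
}

// sendDialRequest is a hypothetical helper: a real implementation would emit
// a dial-request packet on the gRPC stream and wait for the matching response.
func sendDialRequest(target string) (net.Conn, error) {
	return nil, fmt.Errorf("not implemented: would dial %s via the tunnel", target)
}

func pipe(a, b net.Conn) {
	go io.Copy(a, b)
	io.Copy(b, a)
}
```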
E
E
B
E
B
B
That's no problem, Jacopo, but I know that the last one we had was, I think, someone who wanted to be able to run a control plane in one place and the cluster in another place and connect them, basically some sort of on-prem scenario, but with remote clusters and a centralized control plane, and they wanted to be able to connect everything using a VPN.
C
Remote workers, yeah. I think the main advantages we see at the moment are, first, that it would give us the possibility to route the traffic to the targeted control plane passing through one single load balancer, which was one of our design goals, and, on a longer term, I think it could also allow us not to expose the API server directly to the internet.
B
So thanks, guys. Definitely looking forward to seeing some PRs, maybe even a design doc; happy to help collaborate. And just to reiterate, I think you probably want to reach out to Justin SB.
B
I think he knows quite a bit about this, and in fact also knows quite a bit about the Amazon use cases, because he has been one of our Amazon SIG leads. Cool. If we're done with that, Nick, I think you have the next agenda item.
A
Yeah, it's basically just an update on credential provider extraction. So I'm coming back to this now; I didn't have cycles during the last release. So, essentially, where we stood when I was last working on it is...
A
...working on it sooner in this release cycle. So yeah, my goal is to front-load it this time, since we are early in the release cycle. So the other sort of thing:
A
Last time we were deciding whether or not we wanted to start out, in the alpha version, with migrating into staging, and because the conversation had occurred a little bit late in the release cycle, we had decided not to do that. But I'm inclined to just get that up and running early this time. Yeah, all right, sweet.
A
So let's do that, and we'll have a better version of alpha given the extra time. So yeah, I'd say just watch the PR, and as soon as we have some progress on an AWS kind of example credential provider binary, we'll start getting some other cloud providers working on it as well. But personally, I just need to get back into it and remind myself exactly where the work is right now.
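To make the shape of such an example binary concrete, here is a hedged sketch of an exec-style credential provider plugin that reads a JSON request with an image name on stdin and writes registry credentials back on stdout; the kind/apiVersion strings and field names are assumptions and may differ from the final kubelet API:

```go
// Sketch of an out-of-tree kubelet credential provider plugin. Field names,
// apiVersion handling, and the way credentials are fetched are illustrative
// guesses, not the finalized kubelet plugin API.
package main

import (
	"encoding/json"
	"os"
)

type request struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	Image      string `json:"image"` // image the kubelet is about to pull
}

type authConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

type response struct {
	Kind       string                `json:"kind"`
	APIVersion string                `json:"apiVersion"`
	Auth       map[string]authConfig `json:"auth"` // registry pattern -> credentials
}

func main() {
	var req request
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1)
	}

	// A real provider would call the cloud registry here (e.g. fetch an ECR
	// authorization token for req.Image); this stub returns placeholders.
	resp := response{
		Kind:       "CredentialProviderResponse",
		APIVersion: req.APIVersion,
		Auth: map[string]authConfig{
			"*.example-registry.io": {Username: "token-user", Password: "token-password"},
		},
	}
	json.NewEncoder(os.Stdout).Encode(&resp)
}
```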
B
I think that'll definitely be beneficial in the long run. A couple of quick questions: obviously each of the cloud providers will be responsible for doing their own credential provider sidecar, but what about the non-cloud-provider use cases? Do you think there's any value in it for something like kind, bare metal, or the fake cloud?
E
A
Separate from the external credential provider, similar to what we have today, I think both are viable. Right now I have the default Docker credential provider still baked in, not actually a separate thing, so yeah, if you think we should sort of break it out a little bit more, then feel free to comment on the PR, or...
A
That's a good point. I'll revisit that, and maybe we can have a conversation. If it seems like it's feasible to do, that could be an easier reference implementation to build, and we could get that out faster, which would allow people to get their implementations going faster. So yeah.
B
Sounds good. All right, thank you, everyone, great discussion today. Right now this meeting is every other week, alternating with the regular SIG Cloud Provider meeting. I think we're doing pretty well on time, and we'll upload the recording for this soon.
B
If we discover that we have more stuff or need to meet more often than every other week for the network proxy code, we can always have a separate meeting, but for now I think this is working pretty well. Does that sound reasonable? Sounds good. All right, awesome! Thank you, everyone. Bye, cheers.