From YouTube: Episode 27: Let's fix the KPNG KEP! + look @ K8s 1.24
Description
Kubernetes Enhancement Proposals are the way we get stuff approved upstream, and we're behind on the KPNG KEP! Today we'll go through https://github.com/kubernetes/enhancements/pull/2094/files and update it for sig-network to go through and approve. We'll also go through the general KEP template and look at what all it takes to iterate through to get a KEP merged.
A
So, okay, let me see, here we go. Let me add... let's see here. So I have the KEP here, and it looks like Chris just joined. Chris, your microphone's working too? It is, yes. So this is actually a good one for you to join, because we sent the tweet out kind of at the last minute and forgot to tell folks to retweet it. So this will be a pretty casual episode; I don't know how many people are going to show up. So this is episode, I think, 28? I don't know, but anyways.
What we're going to do today is... Chris is going to start helping us... some folks are starting to roll in, so that's cool. If folks are coming in, feel free to say hello and let us know where you're coming from. Chris is going to start helping us host a few shows. So, Chris, while I get started over here and do some notes, why don't you introduce yourself to people and let us know what you're going to be showing us.
C
Yeah, so my name is, obviously, Chris Greit. I'm a solutions architect with VMware on the Avi AKO operator, so I'll start off probably showing some AKO stuff, some AMKO stuff, looking at ingress generally, and just going from there, really. Thanks for inviting us. Yeah, of course. This is the first one of these I've done as a live show.
A
Whatever... do you feel like you're famous right now? So we're gonna do two things today. One fun thing first: oh, here's my favorite part about StreamYard. This is something we do, Chris: we type in here, "hi chris, thank you," okay, and then we can, like, click these, and bloop, see how it comes up?
A
That's our big deal; that's what we do here. So: writing the KPNG KEP, and then let's do 1.24, right. So what we're going to do today: we're not going heavy on the networking stuff, but we can if folks want to ask stuff. David, what's up, hello! Looking like episode 27? Yeah, 27, I think; I kind of made that up. Yeah, so, by the way, for folks on Twitter...
A
I don't know, I guess people are leaving Twitter nowadays, but not me. If folks want to retweet today's show so we can get more people to come, I'll give you the link; here is the link to it. And Chris has come on. Okay, so, 1.24. I think what we'll start with, because KubeCon's coming up: I thought it might be nice to sort of look at what went into 1.24, because the official release came out...
A
You
know
not
too
long
ago,
and
I
don't
think
we
really
ever
talked
about
it
on
the
show
so
docker
sim
removal-
that's
nothing
new
technology
wise,
but
it's
like
official.
Now,
we've
we've
talked
about
that
before
csi
volume,
health
monitoring
being
able
to
load
a
sidecar,
so
this
article
is
from
sysdig
and
so
being
able
to
load
a
sidecar
that
checks
for
the
health
of
persistent
volumes.
Now
cluster
administrators
will
be
able
to
react
better
to
faster
events
like
persistent
volumes
being
deleted
outside
of
kubernetes.
A
It's exposed as a kubelet volume stats metric, so I guess now we have a Prometheus metric for this. Oh, and it has a label for every single claim; that makes... okay, that's nice. So now we have this, and so what you'll have is something like this:
A
pvc-3, right, and so on and so forth. So for each one of the PVCs that you have in your Prometheus time series, you could see, for example, if a specific PVC had a problem; you'd be able to see it, so you could alert on this stuff when you're monitoring your kubelets, and so on. You might see something... I mean, if I look at this now...
A
Has somebody already written a blog post on this? I'm sure somebody's made a screenshot of this or something, right? So where is it? Anybody? Nobody's... well, we should do this someday; maybe we could do it here. Let's see here. Yeah, it's like nobody's demoed this before; that sucks, too bad, because I'd like to see how this is plotted in Prometheus. But I could grep for it... right, here we go.
A
Who else is here... oh, Junjin is here, hi Junjin! So if folks have any questions, we've got our Antrea experts while we're going through 1.24. Junjin, today we may not harass you too much with ridiculous questions. So, okay, here it is, so yeah: here's the test for the volume stats collector, okay. So this is the kubelet collector that has this metric defined somewhere in it, and then here it is; here's roughly what the metric actually looks like, okay.
A
So
this
is
the
unit
test
and
we
can
see
in
here
so
you'll
have
this
metric
and
it'll.
Have
the
space
and
it'll
have
the
name
of
the
claim,
so
you
could
look
up
exactly
what
you'll
be
able
to
see
based
on
this
integer
value
right
where
one-
and
I
think
that
was
in
the
that
was
in
the
in
here
right
it'll-
have
a
value
of
one
if
it's
unhealthy
or
zero.
A
Otherwise. Okay, so you expect these generally to be at zero, and if you see that value going up anywhere, then you know that you're having PVC errors. So that's that.
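For anyone who wants to wire that up: a minimal sketch of a Prometheus alerting rule against that gauge. The metric name kubelet_volume_stats_health_status_abnormal and its labels are my reading of the collector code being shown here, so double-check them against your kubelet's /metrics endpoint before relying on this.

    groups:
    - name: pvc-health
      rules:
      - alert: PersistentVolumeClaimAbnormal
        # Fires when the kubelet reports a PVC's backing volume as unhealthy
        # (the gauge flips from 0 to 1 for that namespace/claim pair).
        expr: kubelet_volume_stats_health_status_abnormal == 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is unhealthy"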
There's another CSI one; let's jump over to that one, because we don't have all day to look at CSI stuff, but what's going on with all these new CSI features?
A
Is this all CSI? Like, okay, storage capacity tracking: "this enhancement tries to prevent pods from being scheduled on nodes connected to CSI volumes without enough free space available." How do you know... why would you have a CSI volume that didn't have enough free space available? I guess...
A
Storage capacity. And why is this Azure-specific, I wonder? Okay, so we're 12 minutes in, so I'm going to try to go fast here. Storage capacity constraints for pod scheduling... so this doesn't seem like it's Azure-specific. I wonder.
A
Let's see here: "CSI drivers can expose how much capacity a storage system has available via the API server. This information can then be used by the scheduler to make more intelligent decisions." So, okay, we have a new API field. The API field is not in any way Azure-specific, to my knowledge, but for some reason the feature gate is. What am I missing here?
A
Weird. Maybe that's just specific to the Azure cloud provider or something, I don't know, but I guess the real feature gate is CSIMigration being true. So if you turn on the CSI migration, and that's defaulted to true... okay, OpenStack... oh, these are all CSIMigration features, so there must be something special about the CSI migration. So, for folks playing at home: if you're interested in this stuff, I think it's probably worth knowing what this feature gate is.
A
What
this
whole
csi
migration
flag
is
because
clearly
it's
it's
relevant
to
like
a
lot
of
the
csi
features
that
are
going
in.
So
what's
this
feature
gate
all
about
you
know,
if
scott
was
here,
you
could
tell
us,
I
don't
know
if
he's
here.
I
see
six
people
are
here,
but
I
don't
know
if
it's
scott
is
here,
though
now
volume,
populator,
okay,
morsi,
everything,
storage,
okay,
other
stuff,
wait!
So
are
these
all
storage
related?
A
I
clicked
here
and
it
sent
me
into
the
csi
stuff,
but
like
docker,
shim
beta
apis
are
off
by
default.
I
think
we
okay,
beta
apis,
are
not
considered
stable
or
enabled
by
default.
This
had
a
good
side
as
it
accelerated
adoption.
However,
this
opens
the
gate
for
several
issues.
For
example,
if
a
beta
api
bug
it'll
have
90
it'll
be
on
90
of
all
clusters,
so
they've
turned
that
off.
So
that's
new,
I
guess
beta
apis
are
off
by
default.
I
I
I
didn't
know
that
that
changed,
so
that's
new
deprecating.
A
I have this repo here, and usually on this show, when I do things, I go here to jayunit100's k8sprototypes repo, into the kind directory, and I just run this kind-local-up script. I say CLUSTER=antrea ./kind-local-up.sh, and that makes a cluster, just a local kind cluster. That's usually what we use on this show, because we don't really do a lot of VMware-specific stuff on here.
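In shell form, the setup he just described is roughly the following; the repo path, directory, and CLUSTER variable are as heard on the stream, so treat the exact names as assumptions:

    # Grab the prototype scripts and stand up a local kind cluster with Antrea as the CNI.
    git clone https://github.com/jayunit100/k8sprototypes
    cd k8sprototypes/kind
    CLUSTER=antrea ./kind-local-up.sh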
A
This is supposed to be kind of a community thing; sometimes we do Tanzu-specific stuff, though. So if I look at one of these objects... get pods... hey, let's just edit a pod: kubectl edit pod. I don't know, did everything have a selfLink before? I don't know. So this is new, that not everything has selfLinks.
A
So I definitely don't see a selfLink here. I don't know if old clusters used to have that or not, but I don't see one here. Okay, 1.24: Kubernetes system components log standardization, okay, some logging stuff. API: efficient watch resumption, so the API server can initialize its watch cache faster after a reboot, okay. So that's a performance thing.
A
And typed OpenAPI v3: how does this change things? It serves one schema per resource instead of aggregating everything into a single one. Okay, so that's API hygiene stuff. Then there's the apps API: maxUnavailable for StatefulSets. So that's going to give you the total maximum number of... so what happens if you... when does that take effect? So let's make a StatefulSet; I have a way to do that. We have that also in our demos. Here we have a...
a.
A
We
have
a
smoke
test
in
here
we
can
use
smoke
tests
and
I
think
this
one
has
a
stateful
set
in
it.
So
we
can
try
to
create
this
and
see
if
we
get
anything.
Okay.
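For reference, a minimal sketch of where that new field sits on a StatefulSet spec, assuming a cluster with the alpha MaxUnavailableStatefulSet feature gate enabled:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: web
      replicas: 5
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
          - name: nginx
            image: nginx:1.21
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          # Alpha in 1.24: allow up to 2 pods of this StatefulSet to be down
          # at once during a rolling update, instead of the usual 1.
          maxUnavailable: 2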
A
So if we do docker ps, then docker exec -ti... if we jump into one of these nodes... I guess if I cat /etc/... I don't think I'm gonna do this right now, but if I cat /etc/kubernetes...
C
Are we saying that this is stable? Because it's in alpha, and with that new behavior, everything is disabled if it's not stable.
A
Where is the API server? Here it is: kubectl edit pod... Now I'm just kind of curious: where do we send feature gates? So I'm just going to check where we send the feature gates here. Okay, so we send the feature gates here, right. So here's a feature gate, and I guess it's NetworkPolicyEndPort=true.
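In shell terms, what he's poking at: in a kind cluster each node is a container, and the API server is a static pod whose flags, including --feature-gates, live in a manifest file on the control-plane node. A sketch (the container name is kind's default and varies with your cluster name):

    # Peek at the kube-apiserver flags inside a kind control-plane node.
    docker exec -it kind-control-plane \
      cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep feature-gates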
A
I
must
have
set
that
feature
gate.
Somehow.
That
seems
like
something
that
manually
I
would
have
set.
So
maybe
this
is
just
my
own
fault.
A
Yeah, okay, I guess... or maybe Antrea sets that. So, when we use this script, it uses the Antrea utility to do this, okay. So actually you could sort of hack that up, and you could add this feature gate, right? We could add this MaxUnavailable one; we could set it to true, right. So this file is the Antrea kind setup script; Antrea has its own setup script for kind that it comes with.
A
kind delete cluster --name antrea; kind delete cluster --name calico. So we can delete these two clusters and keep going through stuff, and then we'll recreate a new cluster and we'll turn on this feature gate, and at least folks will know how to turn on clusters with special feature gates (sketch below), because evidently that's a thing now. It's probably more important than it used to be, now that things aren't on in beta. So: add suspend, okay. I think we are almost 30 minutes in, so...
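If you're not going through a CNI wrapper script, plain kind can set gates directly. A minimal sketch of a cluster config; the gate shown is the StatefulSet one from earlier, so substitute whatever you're testing:

    # kind-feature-gates.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    # Propagated to the API server, kubelet, and the other components.
    featureGates:
      MaxUnavailableStatefulSet: true
    nodes:
    - role: control-plane
    - role: worker
    # then: kind create cluster --name demo --config kind-feature-gates.yaml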
A
So you can suspend a job; I didn't know you could do that, but that's going to stable. Jobs can temporarily be suspended: there's a suspend field in the job spec, so if you're running a job, you can stop it. I don't know how people used to stop them; what did they do? Delete the job, change the cadence, delete the pod that the job started? I don't know. Then: track ready pods in job status, CSR duration... okay, so there's a bunch of odd stuff, and a couple of auth features.
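Since suspend is just a boolean on the Job spec, pausing and resuming an in-flight job is a one-line patch; a minimal sketch (the job name is hypothetical):

    # Pause a running job: its active pods are deleted and no new ones are created.
    kubectl patch job my-job --type=merge -p '{"spec":{"suspend":true}}'
    # Resume it later; the controller starts pods again from where the job left off.
    kubectl patch job my-job --type=merge -p '{"spec":{"suspend":false}}'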
A
Reduction of secret-based service account tokens: this is a security feature. Kubernetes used to automatically create a service account token secret when creating a pod, and that token secret contained the credentials for accessing the API. Now API credentials are obtained directly through the TokenRequest API and are mounted into pods using a projected volume. Okay, so pods don't... and although these tokens will be automatically invalidated when their associated pod is deleted, you can still create the token secrets.
A
Did we used to make secrets for free? Oh, I mean, Amim already has a bag of t-shirts... I think it was one secret per service account; I already said that, so you don't get a shirt, you don't get anything, sorry buddy. See, yeah: so they make one secret per service account by default. Do they not do that anymore?
C
I don't think they do. I was looking at this with HashiCorp Vault recently, and I had to do it that second way: I had to get the token instead of the secret.
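That second way is the TokenRequest path. As of 1.24, kubectl has a built-in helper for it; a sketch (the service account name is hypothetical):

    # Ask the TokenRequest API for a short-lived, auto-expiring token,
    # instead of reading a long-lived secret-based token.
    kubectl create token my-serviceaccount --duration=1h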
C
I had a token, and that's embedded into it as it builds, isn't it? I can find the link. Embedded into what, the pod? Isn't it... it's in this...
A
All right, you keep looking at that; I'm going to go forward. Ricardo's NetworkPolicyStatus is alpha; he finally got it in. So this is something that came about because of all the debates around port ranges: Cilium didn't support them, but Antrea did and Calico did, and so we needed a way to publish metadata about which network policy APIs were and weren't supported. So finally we have that field.
A
So thanks to Tim and Ricardo and everybody else who worked on that KEP. And, as you can see, in Antrea now, when we spin that up, it actually sort of enables that. So, along with that other feature gate... I guess we would also want to enable... what was the feature gate we were enabling before? It was the port range one, right?
A
Well, I guess it doesn't matter, because Antrea will look for those.
A
Here it is; here are the feature gates. So this is the feature gate, here it is: NetworkPolicyStatus. So we may want to ungate that.
A
It "will allow users to receive feedback on whether a network policy and its features have been properly parsed, and help them understand" it. So I'm not sure how that feedback is supposed to be provided.
A
I
guess
the
idea
is
that
the
status
field,
and
then
I
think
after
this
we
should
look
at
the
k,
p
and
g
cap,
so
the
status
field
is
something
that
the
cni
itself
can
like.
How
would
we
query
it?
Do
we
so
it's
inside
of
the
okay,
so
there's
a
network
policy
status,
so
there's
a
network
policy
and
then
inside
of
the
network
policy,
there's
no
network
policy
status
and
I
suppose
the
cni
provider
needs
to
have
right.
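Since it's just a status field on the NetworkPolicy object, querying it is ordinary kubectl. A sketch, assuming the alpha NetworkPolicyStatus gate is on and your provider actually populates it; the policy name is hypothetical:

    # Inspect what the network policy provider reported back about this policy.
    kubectl get networkpolicy my-policy -o jsonpath='{.status}'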
A
I don't know; I assume the idea here is that the CNI... let's look, I'm sure somebody wrote it up: "it's up to the network policy provider to decide how this feature is going to be implemented, and this can even be an early admission webhook," right. So this is some status metadata, and I suppose that status will just get updated by the network policy provider. So, for example, does that mean... I guess Antonin's here... does that mean, Antonin, that we update this field in Antrea? Maybe we do. Like, if I look for...
A
Yeah, Chris, if you want to interrupt here, go ahead.
A
I
bet
you
anything,
though
our
friends
at
andrea
probably
did
a
good
job
filing
this
issue
right
that
we
need
to
implement
this
somehow.
Okay,
here
we
go
support
for
network
policy
status.
So
here
we
go,
quan
went
ahead
and
implement.
This
proposal
has
exposed
the
status
of
andrea
policies
to
the
status
field
corresponding
speaker.
Oh,
this
is
yeah
here
it
is
network
policy
status,
so
he
took
that
from
the
cap.
A
"Besides, the Kubernetes API uses generation as part of... to track the generation of the desired state automatically when changing..." Okay, this proposal looks great. So evidently, in 1442, they implemented this network policy status. If folks want to know how Antrea implemented it, that's where they did it.
Oh no, they don't implement it yet: "we are not implementing this yet, but we are working on upgrading our client-go dependency to 1.20 first." So, okay, we don't have to... but guess what, Antrea implements all the Kubernetes APIs for network policies, so don't worry about it.
A
Maybe some other CNI providers need to worry more about this, but not us. So I think that's probably good enough of a review of 1.24. Well, I mean, let's at least get to the Windows stuff, right?
A
Oh, so you don't have to do, like, a curl where you have a huge... is it that patch now doesn't require a huge JSON payload at the end of it? I don't know. Okay, so now: the Service type=LoadBalancer class field, graduating to stable.
C
Yeah, this is interesting, because this comes from us as well: this was Zudong and his team that created it, and it's almost like an ingress class for LoadBalancer Services. So in, like, an AWS use case, you could specify it not to use ELB and to use another load balancer, for example.
A
Yeah, yeah... I have a... okay, actually...

C
Thank you. Yeah, it's a new thing; it was kind of quite neat.
A
Oh, this came from the cloud provider stuff. Yeah, it's been around. Okay, yeah, this goes back a long way; folks have been working on this. Andrew worked on it, and then it came out a while back, and now it's stable. So Zudong helped finish making it stable, it looks like, and worked on it for a while. Cool.
C
Yeah, it's relevant for me as an AKO person, because now, with cloud load balancers, we don't have to use the native load balancer; it enables use cases like that.
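A minimal sketch of what that looks like on a Service. The class name is made up; the convention is that a controller watching for that class provisions the load balancer instead of the cloud's default one:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      # GA in 1.24: hand this Service to a non-default LB implementation.
      # The value is an implementation-defined class name (hypothetical here).
      loadBalancerClass: example.com/my-lb
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080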
A
Okay, cloud provider stuff, here we go. Here's one... I mean, Amim's here: Windows operational readiness. So this is another one from us, from over here at VMware, Amim and Chinchi doing this one; I worked on it with them. This comes from sig-windows, giving us a real definition of what a Windows Kubernetes cluster is and what it means to make it conformant.
A
Let's go, let's get out of here; let's go do our KEP. So, one of the things we have... as you know, we've talked about this...
A
On the show several times: the KPNG project, this new project where we're trying to rebuild the entire kube-proxy to be pluggable. For people that are new, I'll show you the diagram, the overall architecture. I'm not going to dig deep into it, because we've gone over it so many times on the show, and there are so many people covering it: Rajas is doing a KubeCon talk this week about it, or a Rejekts talk, and there are other folks. There's a lot of ways to learn about this.
A
Now,
we've
done
two
tgiks
on
it,
but
overall
it
separates
the
the
the
thing
that
talks
to
the
api
server
right.
It
separates
that
in
that
from
the
back
ends
that
write
load,
balancing
rules.
So
this
is
our
update
to
this
is
our
new
new
version
of
the
coupe
proxy,
and
so
we
have
all
these
back
ends
now,
right
and
rajas
just
finished
getting
the
user
space
port
working.
A
So
the
upstream
user,
space,
linux
and
user
space
windows
are
both
now
working
in
kpng,
which
is
great
because
it
means
we
can
now
really
deprecate
the
user
space.
Proxying
injury
really
kill
it
for
good,
but
anyways
we're
working
hard
on
windows,
I'm
working
on
that
with
some
other
people,
matt
and
doug
and
stuff.
Now
here's
the
thing
we
have
this
all,
but
dan
winship
was
like.
A
When
are
you
all
gonna
update
the
cap?
Because
the
cap
is
just
a
total
disaster,
we
haven't
looked
at
it
in
like
years,
and
so
that
kep
is
so.
Let's
look
at
that
enhancements
cap
and
let's
look
at
the
comments
and
see
how
far
we
can
get
and
if
folks
are
sort
of,
have
never
worked
on
it
kept
before
and
want
to
sort
of
help
us
to
finish
this
we'd
we'd
love.
Anyone
to
help
us
to
finish
this
off
so
needs
to
be
rebased
onto
the
old
new
format.
A
We split all this out, so the thing watching the API server doesn't actually even need to be running on the same machine as the thing that's writing the rules. So, for example, you could have a Windows kube-proxy that literally just wrote HNS networking rules and had no connection to the Kubernetes API server at all; it would just read the gRPC stream from KPNG. So it's a much faster, cleaner implementation of decoupling the data model for Kubernetes from the networking model that Kubernetes needs, from services to pods.
A
So I don't really have an opinion there; we'll see where Dan responds on that. And, you know, Rajas is going to start working on this KEP also, but we need to rebase it. So let me bullet this: part one, rebase. Joss Cocker, do you want to rebase this one? Because I know he wanted to sort of clean up this KEP and finish it.
A
Let's see what the other comments are. "This sounds more like the original model, where there's going to be a daemon, as opposed to what I understand to be current, where the shared code is more part of a library; or am I wrong?" Hmm... "This KEP is born from the conviction that more decoupling of the service object and the actual implementation is required, by introducing a community-maintained, node-level abstraction provider. This abstraction is expected to result in the..."
A
I don't really understand this question. "This KEP is born from the conviction..." Maybe we just need to delete this sentence. I think we can delete this sentence.
A
Okay, so I've got this service here, and I guess the point being made there is: say that any of these fields change, or whatever; we don't want to have to change the way our backends work. If I have a backend in here, let's take Windows, my favorite, and let's take the proxy in here: we don't want this to have to depend specifically on the...
A
API server's types. So the thing that mediates all that for us lives out here: the global model, right. And that global model, if you look at it... the data model that we're actually using is this, okay: it's this very simple YAML model, and this is the only thing that any backend in KPNG ever needs to read. Okay.
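To make that concrete: the global model is a flattened, pre-joined view of a service and its endpoints. A purely hypothetical sketch of the shape (field names illustrative, not the actual KPNG schema):

    # Illustrative only: the kind of flattened document a KPNG backend consumes,
    # so it never has to join Services and EndpointSlices itself.
    service:
      namespace: default
      name: my-app
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
    endpoints:
    - ip: 10.244.1.12
      local: true
    - ip: 10.244.2.7
      local: false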
So if I go back here again: this is similar to Dan Winship's comment.
A
"The tricky thing is, if it doesn't provide enough information, it'll either be completely unused, or backends will have to watch more resources. For example, if we pre-compute some flattened API from Services and Endpoints, but we actually need some input from the endpoint that isn't in the flattened resource, we still need to..." That's a good point. Right now we don't have that problem in KPNG: you can run a full proxy and nobody has to watch anything else.
A
Okay, so he's saying that because Istio uses the service account when making service-mesh decisions about whether to route to something or not... maybe, I don't know... it still needs to watch the API server. I mean, it's a fair point. Okay, touché!
A
Okay, we've got ten minutes left. By the way, Chris, I know this is all totally random new stuff; if you have questions, you can totally interrupt me. And let's see: Ricardo's laughing at me, because I always tell him how much I hate writing KEPs, and now I'm writing one on the internet. "So is this still part of the plan?" So now Dan is asking whether this includes the equivalent pieces, like the userland IPVS optional kube-proxy implementation, so the same subsystems can cooperate more easily.
A
Oh, actually, no, that's not true, because we actually are doing that for conntrack.
A
Because when we run KPNG now, you can run it using the to-API and the to-backend implementations. So technically, when you start KPNG right now, you don't have to run a separate...
A
You don't have to run this backend separate from the thing that watches the API server; you can run it all as one. But, I don't know, there could be some weird hack where, maybe when you do that, we still actually serve everything on 127.0.0.1, so I'll have to double-check that.
A
But it's a good question. "'Does not require rebuild/release' seems like a dubious feature in general; anything interesting it can be doing will be getting new features of its own as well, so it's rebuilding/releasing anyway."
A
"This decoupling allows kube-proxy implementations to evolve in their own time frames." For instance...
A
Yeah, as an example: EndpointSlices. That didn't affect the backends at all, because once EndpointSlices were introduced, they just went into the thing that creates this data structure; it reads EndpointSlices, creates these data structures, and then sends them to the backends. So EndpointSlices...
A
So now we're at 4:58; we've got two minutes left. Oh, Ricardo loves KEPs.
A
So that's just the very beginning of us starting to rewrite this, just sort of cleaning up this KEP, and we're going to be working on it with Rajas, if anybody wants to help us finish the KPNG KEP and move it to production. And yeah, the tests we run right now: this is a good thing he mentioned. So right now, the CI that we run in sig-network...
A
We do something like this, and so right now in CI we run...
A
Scalability tests, though... there aren't any that we run yet. If there's a large-scale, one-time kube-proxy scale test we should run, let us know; maybe something with the ClusterLoader project. There's a hundred-node CI job that runs upstream also, so maybe that would be a good enough test.
C
No,
it's
good
to
me.
Everyone
thanks
for
the
invite.
Really
it's
been
an
interesting
interesting
hour.
I
didn't
say
much
but
interesting.
A
Yeah. So, okay, Chris is going to start helping us host some shows, alternating with me and Chinchi. So big thanks to Chris for helping us, because we've been doing this forever now; we're almost on our 30th episode and we need help, and it's just great. I really appreciate you jumping into the fire with us here; we're looking forward to learning a lot of stuff from you, and it's great to get to know you.
A
So this is our own little community that we sort of created from scratch. There's no marketing, there's no anything; it's just us hacking around every week. So if anybody wants to do a show with Chris, or whatever, you know where to find us: we're all on upstream Kubernetes Slack, you can leave a comment on the YouTube show, or reach out to me or Chinchi, and we're always happy to host a show with you. Thanks a lot, everybody; if you enjoyed this show, like and subscribe.