From YouTube: Kubernetes SIG Network 2018-03-08
Description
Kubernetes SIG Network meeting from March 8th, 2018
A
B
So we think that — this is the NetworkPolicy podSelector-and-namespaceSelector combined PR — it's basically ready to go. Tim said he was happy with it. No one else has made any major objections, so I guess we're just — it keeps complaining that it doesn't have a release milestone, so I guess we're just waiting for 1.11 now, or something like that.
D
A
Cool, sounds good to me. Dan, maybe you want to — oh, you've got the link, cool, so people can find a link to that in the agenda doc.
A
F
So this is kind of a different networking approach I wanted to run by you. I don't want to spend a lot of time on the background and things like that, but there's a link there which explains the details of what it does. One way to think about it at a high level is that it is abstracting networking in general up to the application level. One way to characterize it —
It's kind of like containers for networking — I mean, exactly like what Docker and containers have been, hugely. If you look at the compute stack, you have physical machines, virtual machines, and then containers on top, progressively higher in the software stack. If you look at the networking stack, there's physical networking and then virtual networking, SDN, and all that — what would be the equivalent of Docker on the networking stack? So that's just a high-level view of what this is supposed to be. Can I share my screen? Yeah, okay, okay.
You can still see my screen, right? Yeah, okay. So this is how it works. There's this app at the top, and the network API calls made by that app are intercepted by this — it has the rather plain name of "trap generator" — so it's basically generating traps whenever the application makes any call related to the network. One way to think about this is that it's kind of a network equivalent of FUSE: when an application makes file system calls, FUSE forwards them to user space.
Similarly, when an application makes network-related system calls like listen, bind, or connect, those calls get forwarded to user space, to the trap handler. There are a couple of ways to implement this; the particular implementation that this paper talks about is a kernel-based one. It uses kernel tracepoints to trap the application's system calls and then forwards them to this trap handler, which is running in user space.
So once the system calls are in user space, the handler decides what to do with them. Ultimately, the only way for an application to access the network is through that API, so you can do a bunch of things at that point. You can say: if the application is trying to connect to an IP address — like, you know, 1.1.1.1, an arbitrary IP address — it can then change that name to a different IP address, or it might reject that call altogether, or it might —
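To make that concrete, here is a minimal Go sketch of the kind of decision such a user-space handler could make for an intercepted connect — purely illustrative; the function names, rule tables, and addresses are invented and are not taken from the project being presented.

    package main

    import "fmt"

    // handleConnect is an illustrative stand-in for the decision a user-space
    // trap handler might make when it sees an intercepted connect to dst
    // ("ip:port"): rewrite it to a real backend, reject it, or let it through.
    func handleConnect(dst string, rewrites map[string]string, denied map[string]bool) (action, target string) {
        if denied[dst] {
            return "reject", "" // e.g. fail the syscall with ECONNREFUSED
        }
        if real, ok := rewrites[dst]; ok {
            return "rewrite", real // retarget the connect before it proceeds
        }
        return "allow", dst // pass the call through unchanged
    }

    func main() {
        rewrites := map[string]string{"1.1.1.1:80": "172.16.0.7:8080"}
        denied := map[string]bool{"10.0.0.9:22": true}
        for _, dst := range []string{"1.1.1.1:80", "10.0.0.9:22", "8.8.8.8:53"} {
            action, target := handleConnect(dst, rewrites, denied)
            fmt.Println(dst, "->", action, target)
        }
    }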
F
So there is this cluster here on the left. In this case there are three hosts, and the apps are running on top. If there is an app like db1 here that happens to be a server, then the existence of this db1 server is told to the other hosts in this cluster via this gossip channel. So the service table keeps track of all the servers in the cluster, and they get propagated to the other nodes in the cluster.
So when a client on a different node wants to reach that particular server, it gets connected back to that server. The service table has an API — you can kind of think of it as iptables. iptables acts on the packets that are flowing in and out of a system — in this case,
you know, the 172-dot-whatever addresses. So it's kind of built on top of this underlying service-table mechanism, and this mechanism — which is kind of the network equivalent of FUSE — and the service router are responsible for listening for the servers that come up on different hosts and propagating that information to other nodes, or to other clients that would like to connect to those servers. So again, I ran through this architecture, but let me get to the demo, just very quickly do a demo, and then we can discuss.
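As a rough illustration of the service table and gossip idea — not the project's actual data structures; every name and address below is made up — a per-node daemon might keep a table mapping a virtual identity to the real host and address, merging in entries it hears from its peers:

    package main

    import "fmt"

    // ServiceEntry records one server: its virtual identity (an IP:port or a
    // plain name like "db1") and the real host and address it listens on.
    type ServiceEntry struct {
        VirtualAddr string
        Host        string
        RealAddr    string
    }

    // ServiceTable is the per-node view, keyed by virtual identity.
    type ServiceTable map[string][]ServiceEntry

    // Merge folds in announcements learned over the gossip channel,
    // skipping entries the node already knows about.
    func (t ServiceTable) Merge(remote []ServiceEntry) {
        for _, e := range remote {
            known := false
            for _, have := range t[e.VirtualAddr] {
                if have == e {
                    known = true
                    break
                }
            }
            if !known {
                t[e.VirtualAddr] = append(t[e.VirtualAddr], e)
            }
        }
    }

    func main() {
        local := ServiceTable{}
        // Announcement gossiped from host0 when the db1 server comes up there.
        local.Merge([]ServiceEntry{{VirtualAddr: "db1", Host: "host0", RealAddr: "192.168.1.10:5432"}})
        fmt.Println(local["db1"])
    }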
E
I'm a little unclear. I started to look at this a little bit, and the introductory verbiage was talking about, I thought, the sort of improper binding between IP addresses and application identities. But if you're intercepting system calls that work with IP addresses, I mean, you have to work with IP addresses.
F
Yes, you have to work with IP addresses here, but only to the extent of being backward compatible. I mean, there are existing applications that expect servers to be named in the form of an IP address and a port, but really it's just a name; internally it converts into a 64-bit UUID. In fact, some of these things might actually become clearer with this demo. Here you can specify an arbitrary IP address of your choosing when you bring up an application, and then clients would be able to reach that server application at that identity.
If you don't specify any IP address, you could just specify a name, and the clients would then be able to access that service through that name. If you don't specify either, then it's not a server — it cannot be a server; it can only be a client, and it doesn't need an ID. Clients don't need an identity. Some of this is discussed in that paper as well.
Right, okay. So what I have here are actually two hosts. The top two terminals here are host 0, and there's host 1, and on one host I'm going to bring up this nginx server. The command here is actually "ax": I give it an IP address — like, I'm saying, you know, this IP address — and I run nginx.
Like that. And now the server's running, and this IP address — there's nothing, actually; it has nothing to do with the interfaces on the host, and there's no interface with this IP address created or anything like that. It's just completely virtual. Now I'm going to connect to this server here — and maybe I'll give it a different IP address — and I will connect to that. So if I do this, it connects, and in fact, because it's a client, as we were saying, it doesn't need any identity.
So now I have two instances of nginx running on host 1 and host 2, each supposedly bound to that same IP address, 1.1.1.1. So when I try to connect to that same IP address from a client, it actually gets load balanced across the two hosts. So, I mean, it went to host 1 here and to host 2 next. Right now it's using random selection, but basically there's no need to configure a load balancer or things like that — you just give it the same —
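The load balancing seen in the demo can be sketched roughly like this — an invented Go illustration, assuming a table that maps the shared virtual address to the real backends, with one backend picked at random when a connect is intercepted:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // pickBackend chooses one real backend for a virtual address; the demo's
    // two nginx instances are both registered under 1.1.1.1:80 and the connect
    // is simply retargeted to one of them. Addresses here are made up.
    func pickBackend(backends []string) (string, bool) {
        if len(backends) == 0 {
            return "", false
        }
        return backends[rand.Intn(len(backends))], true
    }

    func main() {
        table := map[string][]string{
            "1.1.1.1:80": {"host1/172.16.0.11:80", "host2/172.16.0.12:80"},
        }
        for i := 0; i < 3; i++ {
            b, _ := pickBackend(table["1.1.1.1:80"])
            fmt.Println("connect 1.1.1.1:80 ->", b)
        }
    }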
F
Let's say "test"; then it's going to connect only to the test server. It will never go to the prod one, even though both the prod and test servers happen to be running on that same IP address. So this is kind of convenient when you have applications which have statically configured IP addresses in their configuration, and I want to create multiple instances of that application on the same cluster, each in maybe a different environment — staging, production, testing — and each time I bring up an environment —
E
F
This is actually — so this is something that we are using internally, in fact with Kubernetes, and we didn't need to change the applications or the network; in fact, in this particular case, not even Kubernetes. The way we run this is that we just prefix the application with this binary. It's a statically linked binary, and that's all there is to it. The same binary runs as a daemon and also as a client, and there are absolutely no other dependencies.
It runs on Ubuntu, whatever distro it is. So it is actually transparent with respect to what changes are needed to the applications on top or to the infrastructure network at the bottom. This is actually one of the discussion points — that's the main purpose of presenting this here as well: what should be the right way to integrate this?
So then you are able to present that IP address to that application — and of course, this could be, perhaps, that you make up an IP address and assign it to the pod, just like how kubelet creates an IP through CNI IPAM and assigns it to pods. The same thing is possible. Okay.
E
G
F
Sure, you could do that. So right now, I mean, I just did it with the name and it works, but we have a different implementation which is able to intercept at the library layer as well, in which case you don't need to run a DNS server. You would just intercept all the network APIs and return the appropriate, consistent results to applications. Yeah.
So you don't have to be in the data path. See, right now, let's look at kube-proxy — I mean, how would you implement something like ClusterIP, right? It is a fake IP and you're connecting to it, and you can program iptables to convert all references to that IP to the real pod IP. In this case, I'm not in the data path at all; I only intercept the connect once, and it's as if the client is already connecting to the right server at its right identity.
C
F
It's just that, you know, this needs to work for any application, even applications that may not use DNS — I mean, they might have some statically configured IP addresses baked in; it should work then as well. So there are different places you can do this interception: you could do it as a proxy, you could do it at DNS, but some applications may not use DNS, and then you would lose control. In this case the interception is also done at the kernel.
So even if your application is statically linked, like most Go binaries, it would still work, as opposed to other approaches like LD_PRELOAD. So the focus is the breadth of application support: it should generally work for any application, and in fact we use this specifically for legacy applications like WebLogic, SAP, you know, JD Edwards — those kinds of applications.
So that's the daemon, and it's again the same binary. It tells the daemon about the attributes that have been passed on the command line, and then it tells the kernel to start tracking this application and execs nginx, and then the ax command — the client — is out of the picture. It's really the application that's running directly on the OS, and its system calls get forwarded to this daemon; the daemon is obviously talking to its counterpart daemons on other nodes in the cluster and exchanging routes and things like that.
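A rough sketch of that client-side handoff, with invented names (the socket path, attribute fields, and wire format are assumptions, not the project's actual protocol): the wrapper passes its command-line attributes to the local daemon and then execs the real application, so the wrapper itself drops out of the picture.

    package main

    import (
        "encoding/json"
        "net"
        "os"
        "syscall"
    )

    // attrs carries what was given on the wrapper's command line.
    type attrs struct {
        VirtualIP string   `json:"virtualIP"` // e.g. "1.1.1.1"
        Name      string   `json:"name"`      // optional service name
        Argv      []string `json:"argv"`      // the real application to run
    }

    func main() {
        a := attrs{VirtualIP: "1.1.1.1", Name: "web", Argv: []string{"/usr/sbin/nginx", "-g", "daemon off;"}}

        // Hand the attributes to the local daemon (socket path is made up).
        if c, err := net.Dial("unix", "/run/exampled.sock"); err == nil {
            json.NewEncoder(c).Encode(a)
            c.Close()
        }

        // Replace this process with the application; from here on only the
        // daemon and the kernel-side tracking are involved.
        if err := syscall.Exec(a.Argv[0], a.Argv, os.Environ()); err != nil {
            os.Exit(1)
        }
    }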
D
I guess that meets the strict definition of not changing their container, but that would be frowned upon by people who are security conscious if we were to do that. What I'm looking for is: is there a way that we can do this more transparently, without injecting anything that the user could see, touch, smell, or taste?
F
This is exactly the discussion that I wanted to have, so there's a trade-off here. See, one way to do this kind of tracking is at the namespace layer. I could create an additional container in the same pod, and that container would basically enable tracking on itself, and all the processes in that network namespace would automatically be tracked. So you can have transparency, but now you have a dependency on the network namespace, which unfortunately requires privilege to create.
D
F
In fact, that container also wouldn't need privilege; whoever creates that network namespace obviously needs privilege, and I guess that's fine. So creation of the namespace needs privilege in this case. If you are willing to have the ax prefix, then you actually don't need privilege; you can —
D
F
So it's a trade-off. We've been thinking about the right trade-off here: is it okay to have the ax prefix and not need privilege, or is it better to require privilege? At least you're not requiring privilege for ax itself. You can create a network namespace outside — you just do "ip netns add", but that requires privilege — and once you have the namespace, then you can exec into that network namespace, and you won't be privileged after that, but —
D
F
D
F
D
F
I
We don't really have a set thing to look at, and so we're always kind of scrambling for requirements there, and this would kind of allow us to formalize that. And it's not a big deal for what we already do: we already have feature proposals and all that, so this wouldn't be a big deal for that.
I
E
Question — yeah, sorry. I missed the introduction to KEPs, so I'm just learning about them from this afternoon, so I have a question here. I'm looking at the workflow in the document — the KEP, the kubernetes enhancement proposal process .md — the graduation criteria. It wasn't clear to me exactly what graduation means, and more specifically, in the workflow there's a distinction between provisional and implementable, and I'm not —
B
E
— clear on what the distinction is.

I think the first question for the community, technically, is: when does the first PR actually get merged? It says that to become provisional, the SIG has to agree that this is work that needs to be done, but that's often the biggest hurdle; that's where most of the discussion happens. Yeah.
E
I mean, these are all small little things, but I think we can flesh all this out. Maybe I can write that reference KEP and we can take a look at it next meeting and see where we want to make changes, because there's a template; we can make modifications to the template as we see fit for our SIG. And for what it's worth, we'll eventually have —
D
A
I
A
H
Hello. So I'm going to talk about CoreDNS and how we finish the integration of CoreDNS. So first: we are good for beta — thank you very much to everyone that collaborated on the review and approval; it took a very, very long time. So it's done for beta in version 1.10. I would like to anticipate the plan for GA, I guess in 1.11.
So my question here is: is it okay for 1.11, or do we want to wait for more validation or something before we are okay with GA in 1.11? And what do we do once we go GA — what do we do with kube-dns? I mean, right now, if we make CoreDNS the default, you can still install kube-dns if you say "I don't want CoreDNS".
C
A
D
I
I
D
H
C
G
G
A
G
D
H
Okay — so that's the execution. I still have two other questions, if that's possible. One is: as soon as — let's say, even if it's not 1.11 and it happens later or whatever — as soon as we make CoreDNS the default, then a bunch of, maybe not all, integration and e2e test runs that are visible in the dashboard will be running CoreDNS.
How do we synchronize that with the people that are running e2e tests, or how do we advertise that CoreDNS is the default? I mean, that may have some impact: some people will have to translate — I mean, upgrade — their tool, their deployment tool, to use CoreDNS or not. Do I have to deal with SIG Testing, for example, for that?
D
Sorry, is the question just: how do we tell people that it's time to switch from kube-dns to CoreDNS? Yes? And yes, the good news is that they don't have to do it instantaneously, right. If they lag behind, it's fine; they're just going to be using an older version of kube-dns. So we start the outreach to people, we send deprecation notices for kube-dns, and we let everybody that we can find know — we scream as loudly as we can — that this thing's going away.
That would be good for the KEP to figure out: exactly where should we be advertising this, how long are we going to let the overlap exist, and what are we going to do with kube-dns at the end — do we delete the repo, do we move it to the graveyard, do we leave the tree there but just with a README file? You know, there are ten different things we could do; I don't think we have any real established precedent.
There are some who believe that as part of the Kubernetes release we shouldn't release any binary or container that we didn't actually build ourselves, which would probably imply, from a release-engineering point of view, that we should keep a fork — in the GitHub sense — of CoreDNS in a repo related to Kubernetes and cut releases from there. We can have that discussion.
H
A
D
J
So I sent out a proposal called Pod Ready++; I presented the problem statement a few meetings ago. Essentially, there's a disconnect between the pod lifecycle and networking constructs like Service, NetworkPolicy, Ingress, and so on. The workload controllers — essentially the orchestrators of the pods — are ignorant of the networking constructs.
This is not a problem when the system is stable and everything is programmed; it becomes a problem during transitions — for instance, a rolling update, or when there are disruptions or things like that, and the kubelet and the pod have to restart and the network programming has to be redone during that period of time. There's no proper feedback loop from the networking side back into the Kubernetes API to basically tell the orchestrator of the workloads what to do. Right now —
The only signal for the orchestrator is readiness, and pod readiness is dictated by the kubelet, which only runs on the node and only knows what it can infer on the node. So this proposal aims to add an extension point on pod readiness to allow additional feedback — from networking constructs like Services, NetworkPolicy, and so on — to feed back into readiness, so that the workload controllers automatically take the network programming into account as part of readiness.
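As a rough illustration of the extension point being proposed: pod readiness would additionally wait on an externally reported condition. The sketch below uses the readinessGates shape this idea eventually took in the core/v1 API; the field names and the condition type are assumptions for illustration, not the literal text of the proposal being discussed.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // networkProgrammed is a hypothetical condition type that a controller
    // responsible for network programming would set to True when done.
    const networkProgrammed corev1.PodConditionType = "example.com/network-programmed"

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "web-0"},
            Spec: corev1.PodSpec{
                // The gate tells the kubelet not to report the pod Ready until
                // the named condition is set to True in pod.status.conditions.
                ReadinessGates: []corev1.PodReadinessGate{{ConditionType: networkProgrammed}},
                Containers:     []corev1.Container{{Name: "web", Image: "nginx"}},
            },
        }
        fmt.Println(pod.Spec.ReadinessGates[0].ConditionType)
    }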
J
So we went through a lot of iterations internally at Google, went through a lot of discussions, and this is the proposal that we finally came out with — the most surgical fix to the Kubernetes API, without redesigning a lot of core objects. So feel free to take a look and see if there are any recommendations.
D
J
So basically, right now the workload controllers are very limited. For instance, if someone wanted to provide their own controller or something — those third-party operators — they would have to re-implement everything, right; they would have to re-implement the hooks to watch Services and NetworkPolicy, and that makes their life much harder. So this proposal basically gives the extension point needed for specific features to feed back into readiness and then influence all the other workload controllers.
B
So reading the proposal, I'm wondering: why is it opt-in? Like, if you're going to agree that a pod isn't really ready until kube-proxy and network policy and stuff are all aware of it and ready for it, then under what circumstances would you ever want a pod to be considered ready when those things weren't updated?
J
So that's a good question. It can be opt-in or automatically enforced, right — for instance, using mutating webhooks. The Kubernetes user doesn't need to be aware that there's an extra readiness condition on their pod; they just write their own pod spec as usual, and then when they create the pod, the mutating webhook will inject —
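For example, the injection could be done with an ordinary JSON patch returned by the mutating admission webhook. A hedged Go sketch — the condition type and patch shape are illustrative assumptions, not code from the proposal:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // patchOp is one JSON Patch operation of the kind an admission webhook
    // returns to mutate the incoming pod.
    type patchOp struct {
        Op    string      `json:"op"`
        Path  string      `json:"path"`
        Value interface{} `json:"value"`
    }

    // injectGatePatch builds a patch that adds an extra readiness gate to the
    // pod spec, so the user never has to declare it themselves.
    func injectGatePatch(conditionType string) ([]byte, error) {
        ops := []patchOp{{
            Op:    "add",
            Path:  "/spec/readinessGates",
            Value: []map[string]string{{"conditionType": conditionType}},
        }}
        return json.Marshal(ops)
    }

    func main() {
        p, _ := injectGatePatch("example.com/network-programmed")
        fmt.Println(string(p))
    }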
E
That reminded me of two questions I want to ask you about. One is: why a webhook instead of something in-process? And the other question, or observation, was that this makes it a cluster-admin decision. Could there be cases where it is more of a DevOps thing — you know, the person deploying something decides what should contribute to their pod's readiness?
D
So I think that the DevOps person can set those fields with interesting things. Like, if they want to do some manual verification before they flip the ready bit, they can put in a field with, you know, "wait until Mike says it's okay"; and if they are running in Google Cloud, we will automatically add, you know, whatever foobar or frobnicator thing we need for our stuff. Okay.
E
J
D
I would also like to point out that this proposal is the result of literally months of work, going through — not kidding — no less than 10 different designs of ways it could work that Minhan kept coming up with. Even I thought we had it nailed, and Brian would shoot it down; then we thought we had it again, and it got shot down again. It was back and forth for months, so I really appreciate all the work that went into getting the design to a place where it is this simple. Yeah.
E
D
Yeah, they could, but more and more of those — I think anything that's in-tree that would want to set this is moving out of tree anyway, specifically the cloud-provider stuff. So at the end of the day, it doesn't matter how the field gets there. The mutating webhook is sort of the extension mechanism for third parties; if we had an internal need for it, then it could be a real in-process admission controller too.
E
D
So yes, you're right, kube-proxy could be one. kube-proxy is interesting because it's not one, it's N, right? You'd have to have something that aggregates and waits for all nodes to check in before it's ready, or some preponderance of nodes, or — we'd have to work through the details of that, but sure, I can see how that would work.
D
If we have just a couple of minutes left, I'd like to put out a shout for the doc that I shared this morning. It's basically a very short introduction, but I promised last week that I would write it, so I did. It's an introduction to a bunch of topics that feel related to me, including the evolution of Ingress, integration with Istio, service topology, and other things. I shared it with the group this morning, and I welcome any and all feedback on it.