From YouTube: sigs.k8s.io/kind 2019-09-09
B: Yeah, dual-stack is cool. I'm gonna do some review; I might have to punt it another day. I just found out, apparently, my perf snippets that I was gonna do later this week now need to be done today. So I will also be out at the end of this week, most of Thursday, and should be out Friday. I may still get some work in, but I should be out, just for like a day or two.
B: Yeah, and also, I don't know. I do want to get back to this very soon. My hope is still to see if we can catch the 1.16 release and have kind ready for dual stack out the door. That said, looking at the release, I think dual stack is gonna be another one of these things where it's ready, but it's not quite there. Like, with host ports we're still not quite there for dual stack, and IPVS just doesn't have host ports. You know, a small thing, yeah.
A
B
A
B
B
B
B
If
it's
going
to
do,
C
and
I
are
not
because
there's
a
bunch
of
other
stuff,
we
need
to
run
and
it's
something
of
a
hassle
and
possibly
wasted
to
like
have
multiple
Damon
sets
on
every
node.
When
we
control
the
whole
stack,
we
could
just
have
like
the
kind
operator
that
does
kind
cluster
stuff,
including
like
external,
like
all
the
external
stuff,
that
we'd
want
for
cloud
provider
and
just
just
one
thing.
B: Alternatively, we might have one operator that kind of does all that stuff, and we have kindnetd. But I think either way, that's going to be something of a breaking change if we want to do anything else as an operator, or we have to kind of shoehorn it into kindnetd. So we need to think more about coupling for that.
A: This is the thing: I was thinking a lot about this, and what I saw when I started on dual stack is that we are mixing a lot of things, and that's impossible, see; that approach is not going to scale at all, it's spaghetti. So the thing is, you should have a loop for iptables, and you can run a daemon with ip-masq, you know, decouple all this functionality; you create another daemon for routing, and you do...
A: The controller is the one that has to gather all the routes, all the IP addresses, and build the routes, and then, with a custom CRD, publish the routes. So the agent in that node would have to install the routes, I mean, handle everything: iptables, routes. And the controller is the one creating the node API, with routes, with ip-masq, and with whatever we want to put there, possibly.
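(A rough sketch of the controller/agent split being described, assuming a hypothetical NodeRoutes custom resource and the vishvananda/netlink library; the names are illustrative, not an actual kind API.)

```go
package kindnet

import (
	"net"

	"github.com/vishvananda/netlink"
)

// Route is one entry the controller computed from the cluster's node objects.
type Route struct {
	Dst string // destination pod CIDR, e.g. "10.244.1.0/24"
	Gw  string // IP of the node that owns that CIDR
}

// NodeRoutes stands in for the published custom resource for a single node.
type NodeRoutes struct {
	NodeName string
	Routes   []Route
}

// applyRoutes is the whole job of the per-node agent in this sketch:
// install exactly what the controller published, nothing else.
func applyRoutes(nr NodeRoutes) error {
	for _, r := range nr.Routes {
		_, dst, err := net.ParseCIDR(r.Dst)
		if err != nil {
			return err
		}
		// RouteReplace is idempotent, so the agent can simply re-sync.
		if err := netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: net.ParseIP(r.Gw)}); err != nil {
			return err
		}
	}
	return nil
}
```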
B: But that also doesn't change some of the details about deployment and how many images we have in it or not. Like, for example, the other aspect that isn't in this yet is some of the things we do with an external cloud provider and what that architecture looks like. That doesn't have to be explored today, but it is something we need to figure out before we go, you know, more stable. I think we need to leave ourselves room to always deploy some extra stuff.
B: We need to figure out how to best manage that going forward, so that we can continue to further decouple the kind images from the binary but still have them play well. And, yeah, I think we're gonna wind up needing something like a cloud controller, and I don't know if that's actually gonna be a DaemonSet or not. There's a bunch of interesting ways you could approach all of this.
B: At some point we might have to do that, but I just mean, like, the issue you were talking about, where we can't tell the kubelet to set multiple addresses: the solution, one of the best solutions to that, is to do an actual external cloud provider. Like, if we were trying to ship Kubernetes for real on Docker nodes, that's how we would do this.
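(A minimal sketch of why an external cloud provider solves the multiple-address problem: the provider's Instances implementation can report every address for a node, where a single kubelet flag could not. The lookupContainerIPs helper and the placeholder addresses are assumptions for illustration.)

```go
package kindcp

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

type instances struct{}

// NodeAddresses has the signature of the cloud-provider Instances method;
// reporting both families here is what lets nodes carry multiple addresses.
func (i *instances) NodeAddresses(ctx context.Context, name types.NodeName) ([]v1.NodeAddress, error) {
	// Hypothetical helper that would inspect the node container via the runtime.
	ipv4, ipv6, err := lookupContainerIPs(ctx, string(name))
	if err != nil {
		return nil, err
	}
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ipv4},
		{Type: v1.NodeInternalIP, Address: ipv6},
		{Type: v1.NodeHostName, Address: string(name)},
	}, nil
}

// lookupContainerIPs is stubbed with placeholder values for the sketch.
func lookupContainerIPs(ctx context.Context, name string) (string, string, error) {
	return "172.18.0.2", "fc00:f853:ccd:e793::2", nil
}
```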
B: Possibly we could implement more stuff there, but the question is: what does that deployment look like, what are the controls for it, what's the architecture? And I'm on the fence about it so far. Like, one answer to that could be that kindnetd could just become, kind of, the config for that. Sorry, all the stuff we're doing in kindnetd could, like, probably, arguably, fit under the cloud integration in this case, that being, like, instead of doing cloud routes or something, we're doing the nodes. So, like, you might have a controller...
B: ...do this. Or, for both of these things, you might have it part of, like, you know, the thing that brings up the cluster. Like, stuff like marking the node ready could be done one-off from kind, but as much as possible, if anything might ever need doing repeatedly, we should have it automated in the cluster. I need to dig more into that, and we need to figure out, like, if we really want these concerns separate or not, because, for example, we might have a better idea of whether the node is ready.
B: So, in the meantime, what I have been trying to land is things to build all of the other features on, and not in these daemons but in, like, the kind command and stuff. And I'm hoping that by the time we release 0.6, it will be significantly easier to debug these things. So the focus right now has been the logging stuff. So, if it lands... I think the PR is actually assigned to you; there's some stuff that does things like, for messages that are at increased logging verbosity...
B: ...that includes the line number and the file, like a debug file and line number. We have stack traces from most errors, and I'm working to kind of clean that up. Ideally, what we want is stack traces for things that are internal errors; for things like your config file, where your config is bad, we don't want to show the user a stack trace for that, but we should show a stack trace when, like, we failed to exec or something.
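(A minimal sketch of that split, assuming a hypothetical userError type for "your config is bad" failures and pkg/errors-style stack annotation; internal errors print their trace, user-facing ones print only the message.)

```go
package errs

import (
	"errors"
	"fmt"
	"os"

	pkgerrors "github.com/pkg/errors"
)

// userError marks failures caused by user input, such as a bad config file.
type userError struct{ msg string }

func (e userError) Error() string { return e.msg }

// stackTracer is the interface pkg/errors attaches to annotated errors.
type stackTracer interface{ StackTrace() pkgerrors.StackTrace }

// Report always prints the message, but spews a stack trace only for
// internal errors, never for user mistakes.
func Report(err error) {
	fmt.Fprintln(os.Stderr, "ERROR:", err)
	var uerr userError
	if errors.As(err, &uerr) {
		return // bad config etc.: message only
	}
	var st stackTracer
	if errors.As(err, &st) {
		fmt.Fprintf(os.Stderr, "%+v\n", st.StackTrace())
	}
}
```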
B: The other thing, on that note, that I'm currently working on this morning is: instead of just saying, like, "exit status 1" when a command fails and giving a stack trace, I want to always give the output. So I'm reworking the exec code to have a custom error type that includes a lot of information about what failed.
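(A minimal sketch of an exec error type like the one described; the RunError name and fields are illustrative, not necessarily kind's final shape.)

```go
package kexec

import (
	"fmt"
	"os/exec"
)

// RunError wraps a failed command with everything needed to debug it,
// instead of surfacing a bare "exit status 1".
type RunError struct {
	Command []string // the command that was run
	Output  []byte   // combined stdout and stderr captured during the run
	Inner   error    // the underlying error, e.g. *exec.ExitError
}

func (e *RunError) Error() string {
	return fmt.Sprintf("command %q failed: %v\noutput:\n%s", e.Command, e.Inner, e.Output)
}

// Unwrap keeps the chain walkable for errors.Is / errors.As.
func (e *RunError) Unwrap() error { return e.Inner }

// Run executes a command, returning a *RunError on failure.
func Run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return &RunError{Command: append([]string{name}, args...), Output: out, Inner: err}
	}
	return nil
}
```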
B: That's gonna be behind the normal interface. I have that in a branch somewhere; I've held it until we have all these other things, like the exec improvements and the logging and debug stuff, to build on. So, basically, I'm building all of our own debug tooling and then going to build the new features on top of that, so they have it from the get-go. And it works pretty...
B: ...well. We're just, like, doing things where, you know, there's a provider interface for nodes, and then nodes are an interface, and that decouples the majority of the code. But in this case, I'm actually talking about the super low-level exec, because all the time users will give us, like, you know, it ran some Docker command and we got some error, and it just says, like, you know, the status code of the command, which doesn't really tell you what failed. So now, hopefully, by the time...
B: ...we'll have the specific output and command, and also the debug logging, if you have that enabled, will tell us, like, what file, what line was running. It should also help us prove, like, what version of the code was being used. It still needs work; in particular, the stack trace is off too often right now, and the code that's currently checked in doesn't always get the trace you want. It doesn't always get it. I have some new code in a PR...
B
That's
out
that
will
walk
the
error
chain
until
it
finds
the
and
retain
the
deepest
stack
trace,
annotated,
error.
It
can
find
and
we'll
print
that
one
so
that
you,
you
know
you
get
the
whole
stack
as
much
as
possible,
but
the
the
other
thing
we
need
is
more
more
work
on,
like,
like
I,
said
like
if
it's
a
configuration
error
that
should
not
spew
a
stack
trace
because
we're
just
gonna
get
bugs
filed
that
are
like
kind
broke,
and
it's
gonna
be
like
actually
your
config
files
and
values.
It's
not
yeah.
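(A minimal sketch of the chain walk being described, assuming pkg/errors-style annotated errors that expose StackTrace() and, from pkg/errors v0.9 on, Unwrap().)

```go
package errs

import (
	pkgerrors "github.com/pkg/errors"
)

type stackTracer interface {
	StackTrace() pkgerrors.StackTrace
}

// deepestStackTrace walks the chain and keeps the last (deepest) annotated
// error it sees, so the printed trace points as close to the root cause as
// possible. It returns nil if nothing in the chain carried a trace.
func deepestStackTrace(err error) pkgerrors.StackTrace {
	var deepest stackTracer
	for err != nil {
		if st, ok := err.(stackTracer); ok {
			deepest = st // remember it; a deeper one may still follow
		}
		u, ok := err.(interface{ Unwrap() error })
		if !ok {
			break
		}
		err = u.Unwrap()
	}
	if deepest == nil {
		return nil
	}
	return deepest.StackTrace()
}
```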
B: That's in the parser, from, I think, the API machinery anyhow, and that I am also going to change. Same thing: I want to have better error tooling and logging to go with that, so that we can output it better. The other really big thing, and it's almost done, I think this PR covers the rest of it as well, is that we now write everything to standard error, so, actually, you can silence it now. So hopefully, with a bit more work, we'll have some good patterns here.
A
With
this
pill
example,
this
is
Nicolas
inspect
at
Exeter
command,
docker
Network,
inspect
F.
How
is
going
to
trace
that?
How
is
you
going
to
log
an
error
on
that
on
that
comment?
I'm.
B: Well, so that's another thing: detecting that, and the node provider interface will include some stuff for that. But this is for, like... so that's one axis. The other axis is: we know errors are going to happen; make them debuggable when they happen. And so, like, even if the code we write is correct, the Docker that's running that code can be broken. We need code that gives us useful output instead of "exit status 1".
B: The other thing is, when we are logging that, like, it's a little bit nicer for library import, but it also means, for us, we get some of the klog-style features, like including the line number, but we don't actually have that dependency, because we just have our own little implementation.
B: It's pretty small. It's slightly larger now, because I made a few minor optimizations and things, but, like, it tries to do less: it only logs to standard error. There's none of, like... klog has all this crazy logic for, like, logging to a bunch of files and stuff, and that probably makes more sense for servers, but it's not super useful for kind. So we have one that gives us a lot of the standard Kubernetes-style logging tooling, but none of the hefty dependency.
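(A minimal sketch of a klog-flavored logger like the one described: stderr only, caller file and line via runtime.Caller, no dependency. The names are illustrative, not kind's actual log package.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"time"
)

// Infof logs to standard error only, with a klog-style prefix carrying the
// calling file and line number, and no other machinery.
func Infof(format string, args ...interface{}) {
	_, file, line, ok := runtime.Caller(1)
	if !ok {
		file, line = "???", 0
	}
	fmt.Fprintf(os.Stderr, "I%s %s:%d] %s\n",
		time.Now().Format("0102 15:04:05.000000"),
		filepath.Base(file), line,
		fmt.Sprintf(format, args...))
}

func main() {
	// Prints something like: I0909 10:12:42.123456 main.go:24] created node "kind-control-plane"
	Infof("created node %q", "kind-control-plane")
}
```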
B: So it's kind of debatable, like, how much of the CI we want to switch to this. I think we probably want a presubmit, though, because if we don't get one, then, like, it's gonna break. So, I'm thinking, we're going to get kind to blocking anyhow; we might want to have one of those blocking. You know, it has this huge time gain over the other presubmits, and maybe we lose it using that; I am a little sensitive to that, because one of our main arguments is that kind is, like, way faster.
B
So
none
of
the
other
priests
limits
are
using
are
not
using
the
basil
build,
so
we
kind
of
be
hurting
ourselves
there.
We
have
other
options
we
could.
We
could
like
just
have
a
thing
that
does
that
or
or
honestly
we
could
let
it
break
occasionally
and
manually
fix
it.
It
shouldn't
it's.
B
It's
only
gonna
happen
when
people
are
changing
the
provider
packages
which
they're
not
really
supposed
to
be
doing
much
of
anymore,
because
they're
moving
them
out
of
tree
they're
already
staged
so
like
those
those
they're
supposed
to
be
legacy
and
they're
writing
new
ones
and
other
repos
to
replace
them.
That
are
fully
external
to
begin
with.
So
we
can
definitely
ship
our
releases
with
these
images,
but
I'm
not
sure
if
we
want
it
in
CI,
just
because
of
the
like
the
throughput
cost
on
testing.
It
costs
us
at
least
a
couple
minutes,
but.
B
Like
I
can
I
can
kind
of
see
like
compare
a
normal
trying
to
PR.
Compare
that
one,
because
kind
pairs
have
no
effect
on
kubernetes
build
right.
They
have
an
effect
on
you
know,
kind,
gold
or
something,
but
like
that,
not
really
so
you,
you
can
see
the
on
average
its
trending
it
to
be
longer
we're
much
more
likely
to
go
north
of
20
minutes
now,
and
it's
going
to
get
worse
when
we
run
more
tests
so
I'm
on
so
I'm,
not
sure
if
we
want
this
to
be
the
default.
B
I
might
change
that
PR
so
that
it
makes
it
possible
to
do
both,
but
it
keeps
probably
keep.
Maybe
it
keeps
the
default
basil
and
CI,
or
something
or
maybe
C
I
explicitly
opts
into
it,
because
the
other
thing
is
locally.
It
would
be
easier
to
call
that
script
as
sort
of
a
you
know
until
we
have
better
tooling,
which
I'm
also
looking
at
hopefully
for
next
quarter
for
running
e
to
be
test
locally,
you're,
probably
less
likely
to
have
like
basil
and
whatnot,
but
with
kind.
B
We
know
you
have
darker
and
most
of
what
the
make
build
depends
on
is
docker,
so
it
would
like
free
to
e
de
SH.
It
probably
makes
more
sense
for
the
script
default
to
make,
but
we
might
want
to
keep
CI
pointed
at
basil,
or
at
least
most
of
CI
I'm
thinking
more
so
for
kubernetes,
pre,
submits
than
kind
I'm,
more
willing
to
pay
time
in
the
kind
repo
of
like
slower
priests
limits,
but
like
like
for
kind
for
kubernetes.
B: ...paying something, potentially maybe, like, five minutes' worth of time that the other jobs are not paying, is, like, a pretty poor trade-off for "well, the images are smaller" or "we get some coverage of this", when that's really not kind's problem to solve. For releases, the images are super nice, when...
B
Want
to
that
so
1:16
is
like
almost
at
the
door,
so
it's
kind
of
not
really
an
option
right
now,
but
sometime
early
in
the
1:17
cycle
is
my
ideal
hope
we
don't
want
to
be.
You
don't
want
to
add
a
blocking
presubmit
at
the
very
beginning
of
the
cycle,
because
the
because
it's
going
to
be
such
a
heavy
PR
load
but
messing
around
is
bad.
You
also
don't
want
to
do
it
towards
the
end
of
the
cycle,
because
you
might
disrupt
the
release
and
that
will
look
bad
and
be
frowned
upon.
B
So
I'm
going
to
be
floating
a
couple
of
ideas
around
how
we
can
make
the
like
that
well
commands
and
things
easier
and,
depending
on
how
much
everyone
wants
to
invest
in
that,
I
might
even
want
to
build
that
first,
so
that
we
get
a
really
good
impression
and
maybe
just
get
like
some
limited
book.
You
know
like.
Maybe
we
get
ipv6
walking
or
something
because
it's
still
a
bit
on
it's
still.
It's
still
fairly
unwieldy
to
actually
run
these
yourself
and
it
shouldn't
be
so
I'm
slightly
considering
that
everybody's
first
introduction
to
that.
B: You need to, like, clone the repo and put it in a certain position relative to Kubernetes and, like, call this script. And right now it defaults to Bazel, which means that it probably will break on most people's machines, because they won't have it set up, and, like, the experience is not good trying to run the e2e tests. And that's something that I will be improving before we do a workshop on it at KubeCon.
B: ...which I need to follow back up on as well. So, yeah, but those are, like, slightly back-burner this week; in between, I've got quite a lot going on. I'm hoping to get this new logging and errors stuff in, particularly around exec and exit handling. I'm hoping to, like, kind of do some planning, figuring out things I might want to work on, and then I'll discuss that once I'm a little bit more confident.
B
I'm
also
hoping
to
configure
cubelet
to
be
less
error.
Spamming.
If
you
do
two
containers,
I
think
I
know
how
to
fix
that
and
then
into
next
week.
I'll
come
back
stronger
on
dual
stack
and
reef
and
factoring
out
kind.
So
we
can
do
tests
and
more
more
providers
and
things,
and
those
are
the
things
that
I
really
want
to
get
and
point.
Six
is
like
really
good
debugging.
Maybe
we
don't
have
like
diagnose
command
yet,
definitely
not
yet,
but
we
have
like
good.
B
We
have
much
improved
like
logging
in
controls
and
error
reporting
with
races
and
things.
We
have
dual
stack
hopefully,
and
we
start
to
have
like
test
coverage
and
like
Padma
and
friends
for
after
that,
I
think
we
can
start
doing
things
like
cloud
provider
really
good
experience
for
interacting
with
it
for
a
you
to
be
testing
kubernetes
and,
like
maybe
the
diagnosed
type
stuff
for
sort
of
like
Auto
debugging
like
just
run
this
command.
B: If you look at most of the code, while it's keyed to a concrete node type right now, it's just doing things like executing a command against the node. So my goal is to have a very restricted subset of things.
B
So
even
things
like
get
cube
config
does
not
actually
need
to
be
probably
a
node
command.
We're
gonna
have
to
diverge
those
things,
and
instead
it's
some
type.
On
top,
it's
like
we
get
given
a
node
interface.
We
can
get
the
cube
config
with
this
function
by
doing
exec
and
almost
everything
boils
down
to
exec.
So
basically,
that
is
a
much
more
tractable
problem
and
I
actually
have
a
really
fun
partial
implementation.
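(A minimal sketch of the node boundary being described: a node is just something you can exec against, and helpers like fetching the kubeconfig are plain functions built on exec rather than node methods. The interface, the dockerNode type, and the admin.conf path are illustrative.)

```go
package nodes

import (
	"bytes"
	"os/exec"
)

// Node is the restricted surface: something you can run a command on.
type Node interface {
	Command(name string, args ...string) *exec.Cmd
}

// dockerNode satisfies Node via `docker exec`; a podman or pod-based
// provider could satisfy it just as easily.
type dockerNode struct{ container string }

func (n dockerNode) Command(name string, args ...string) *exec.Cmd {
	return exec.Command("docker", append([]string{"exec", n.container, name}, args...)...)
}

// KubeConfig is a helper built on top of exec, not a Node method, so every
// provider gets it for free.
func KubeConfig(n Node) (string, error) {
	var buf bytes.Buffer
	cmd := n.Command("cat", "/etc/kubernetes/admin.conf")
	cmd.Stdout = &buf
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```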
B
When
I
was
trying
to
flesh
out
what
all
this
could
do,
I
have
a
really
fun
attempt
at
kubernetes
pods
as
a
node
provider,
we'll
see,
but
I
stopped
I
stopped
working
on
all
that
because,
like
debugging
is
so
painful
right
now
and
I
do
think
like
if
I
ship,
all
these
and
people
are
running
in
some
weird
environment
with
pod
man,
that
I
don't
use
and
don't
know
it's
gonna
be
super
hard
to
keep
it
all.
Working
so
struck
like
strong
retake
on,
like
logging,
is
a
thing
and
we
need
lots
of.
B
Yeah
I
have
one
somewhere
but
I'm
like
not
as
like
a
super
not
be
with
it,
but
let
me
try
out
some
of
the
ideas,
because
that's
the
thing
it's
like
I
was
just
gonna
write
the
design
that
I'm
like
there's
too
many
little
like
you
need
to
type
on
this,
actually
prototype
it
right.
So
I
have
one
prototype.
B
Actually
I
should
figured,
let's
like
that,
might
be
still
like
a
big
project,
let's
at
least
ship
the
log
stuff
this
time,
because
also
that
is
a
user
facing
breaking
change
of
like
okay,
there's
a
quiet,
flagon
of
verbosity
thing
now
and
like
there's
no
more
like
you
can
silence
all
output
if
you
want,
but
we're
not
gonna
try
to
do
like
error,
silencing
or
whatever,
like.
Ideally,
you
don't
do
that.
In
fact,
most
users
I
think
we're
just
setting
really
high
log
levels,
so
they
could
debug
stuff.
So
instead,
I
want
finer.
B
Green
verbosity,
like
kubernetes,
does
for
all
of
its
binaries
with
kellogg
and
we're
gonna
start
ship
that
and
we'll
need
to
refine
some
practices
around.
Like
you
know,
zero
is
the
obvious
one
master
things
that
the
users
should
always
see
and
they're
not
really
logs
they're,
just
not
an
error
or
warning
and
they're
not
they're,
not
standard
out
they're,
not
like
you
know.
We
created
this
node
with
this
name.
B
They're
like
we
flick
the
spinner,
for
example,
that
that's
just
diagnostic
messages,
so
the
things
you
want
them
to
see
by
default
has
to
be
zero,
but
above
that
we
need
better
practices
for
like,
like
kubernetes,
has
spent
some
time
on
this
and
it's
kind
of
coltd
on
like
well.
If
you're
gonna
do
bug
all
of
the
state
of
iptables,
that's
probably
at
least
you
know
some
high
V,
because
otherwise
is
too
much
and
the
the
API
from
K
log.
B
That's
the
one
part,
that's
pretty
good
for
logging
site,
it's
designed
so
that
for
that
stuff
it
costs
almost
nothing.
If
you're
not
actually
writing
it,
it
fills
out
really
fast
and
it
doesn't
actually
do
any
formatting
or
anything.
So
we
have
that
same
pattern,
so
we
can
have
like
v
10,000.
If
we
really
want
fish
that,
like
I,
don't
know,
we
can
put
log
statements
at
like
every
line
or
something
if
we
wind
up
meeting
them,
it's
not
too
expensive.
B: Yes, I mean, that's true for some of it, but for things like, you know, whether we should do the parallel join or not, I mean, we just have to check the version. And that one actually is gonna need some special care going forward, because... something I will make a push for, once I start making the, like, node refactor push, is first-class support for mixing node versions if you use a config file, so that we can...
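(A minimal sketch of that version check; the semver library and the 1.15.0 cutoff are assumptions for illustration, not a researched kubeadm support boundary.)

```go
package version

import (
	"github.com/Masterminds/semver/v3"
)

// parallelJoinMin is an assumed cutoff for when parallel joins are safe.
var parallelJoinMin = semver.MustParse("1.15.0")

// canJoinInParallel reports whether every node version in the cluster is at
// or above the cutoff; with mixed node versions, one old node disables it.
func canJoinInParallel(nodeVersions []string) (bool, error) {
	for _, raw := range nodeVersions {
		v, err := semver.NewVersion(raw)
		if err != nil {
			return false, err
		}
		if v.LessThan(parallelJoinMin) {
			return false, nil
		}
	}
	return true, nil
}
```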
B: Oh, the node version is, you know, high enough; oh, but wait, are all the nodes that are gonna join like that or not? You don't want to be mixing those modes, probably, because it's got a lock, that sort of thing. So we need to figure that out, but that's another one of the things where I'm like: it probably makes more sense to look at that more after we do the drastic, like, node refactor, and say, okay, this is the new node API, and here's our targets there. Which, in turn, like I said, makes more sense...
B: Yeah, I think there's a lot of good plans; I just don't want to, like, get too over-committed to all of them yet, because, like, they're going to need to kind of come one after the other, and they probably need some more idea-refining. But the other one that I'm really thinking about is: how do we test? That experience is subpar right now. Like, testing your own thing with kind: sure, fine, right, you know, you create a cluster; it's a Kubernetes cluster.
B: I have a couple of ideas, but I don't want to take that on more right now, just because of many things; I think kind should probably focus on cleaning up its own code right now, and that includes the node stuff, because that will also let us do mocks and unit tests and things. And then, when we've landed all of our own internal cleanups, we can start trying to ship fancy features again. Reboot is another one.
B: Those are going to make more sense to figure out once we've decoupled from only Docker. And the current solution, yeah; the current proposal is not sufficient, of, like: we'll just copy the host's resolv.conf, modify it, and then have two resolv.confs, one for Kubernetes, then one for the rest of the node.
B: I mean, we have several: we should still make kind viable on not-the-Docker-bridge; we should make systemd-resolved viable, and we can solve that without even doing much, I think. But I want to explore those in detail, and I think, again, that exploration will make more sense when we have, like, a better idea of what things like podman look like, where they're not gonna have a Docker bridge to specify.
B: Check, so the problem is, users are very familiar with bridge networks for Docker, and I think we should leave that there, because they're gonna want to join their own things that are not nodes to it, and that sort of thing. We just need, like... this should not be that hard; we have a couple of options for dealing with resolv.conf. It needs a little bit more exploration now.
B: I took a look at it; it actually is... it's not that. You'd think the problem is that we hit the CoreDNS loop stuff; like, the problem is that it serves on loopback. If it didn't serve on localhost, we would just use it; I mean, everyone's Docker containers that are not on the default bridge are using this fine.
B
That
that
other
tre
is
like
almost
on
the
right
approach,
it's
just
instead
of
figuring
out
how
to
actually
do
this
inside
the
nodes
it
it
it
like
copies
the
hosts
config
itself
and
that
we
do
not
want
to
be
implementing,
and
it
also
isn't
portable
I
think
we
can
do
this
just
using
the
config.
We
can
do
this
entirely
inside
the
node
so
but
again,
that
stuff
will
make
more
sense
when
we
clear
up
that,
like
the
boundaries
on
nodes,
so
that's
coming.
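(A minimal sketch of doing the fix-up entirely inside the node, as suggested: drop nameserver lines that point at loopback, which the container cannot reach and which trip CoreDNS loop detection. The public-resolver fallback is an assumption.)

```go
package dns

import (
	"net"
	"strings"
)

// fixResolvConf rewrites resolv.conf contents from inside the node: it drops
// nameserver lines that point at loopback addresses such as 127.0.0.53 and
// keeps everything else untouched.
func fixResolvConf(contents string) string {
	var out []string
	kept := 0
	for _, line := range strings.Split(contents, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			if ip := net.ParseIP(fields[1]); ip != nil && ip.IsLoopback() {
				continue // unreachable from inside the node container
			}
			kept++
		}
		out = append(out, line)
	}
	if kept == 0 {
		// Assumed fallback so lookups still work; a real implementation
		// would want this configurable.
		out = append(out, "nameserver 8.8.8.8")
	}
	return strings.Join(out, "\n")
}
```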
B: ...to come back to focusing harder on dual stack, when we've, in kind, that is, figured out some more of these things upstream, like host ports and kube-proxy and whatnot. I know there's a big push to be like "1.16 is dual stack", but I've seen this before, you know: it's "dual stack" where, like, it'll probably really be dual stack in 1.17, yeah.
A: The IPVS is working; I didn't try it. I know that the guy that was working on it had that workaround for, I don't know what it is that's missing, but he said that he tested it in his environment and it's working. Aha, but can we use kube-proxy with IPVS by just patching the configuration? Oh...
B: It's fine, but, you know, I'm also of the opinion that something like Cilium with eBPF is going to be the actual endgame for all of this stuff; like, instead of hopping between modules, we're just gonna, like... it's just, we're gonna move to eBPF, and kube-proxy is just gonna be slow about it. In the meantime, iptables works.
B: The problem is that we haven't been shipping a userspace binary to match the kernel module, and iptables is weird in that, like, normally, you know, the kernel doesn't break userspace, and they technically didn't here, but because they switched modules, that makes userspace really confusing. What...