From YouTube: Kubernetes SIG Network bi-weekly meeting for 20210415
A: We're recording. This is the Kubernetes SIG Network meeting from April 15, 2021. I didn't ask in advance, but Tim or Bridget, do you have triage ready to go?
B: We did. All right, so in reverse chronological order: I did filter out a few beforehand, so if anybody looked more than an hour ago, you'll probably miss some. Here's a user who's reporting a problem with graceful termination, but it wasn't exactly clear what the issue was. So I followed up asking for a bit more explanation, but I could use a volunteer to follow up when this person responds, to figure out whether this is actually a bug or just a misunderstanding of what happens.
B: All right: HTTP lifecycle hook tests flake on multi-node clusters. The interesting part here is this one is SIG Windows, so it sounds like there's some assumption baked in. There are some comments on here from Antonio, and discussion about the networking model, which does not require that the host network be able to reach other pods unless the platform supports host-networking mode, which Windows does not. So there's already a response. What is the right answer here?
D: From what someone told me, that's what they've tested in particular.
B: It wasn't clear to me if that was running in host network or not; my memories are a little fuzzy there. Anyway, I don't think there's anything that specifically says the kubelet has to run in the root namespace, just that it has to be able to reach the pods. I guess that is the definition of a platform, yeah. No, no.
G: I want to be very specific, because people might have pseudo-multi-tenancy around nodes, where nodes cannot access each other and so on. Pods need to access each other; that's what's required.
B: Okay, so let's take this one. Who's assigned to this? Antonio, are you assigned? No, I'm assigned, okay; I'll look at it later and see if there's more to respond. Next: kubectl describe endpointslice panics. This one's a fun one; I just kept it here so I could stick Robin's name on it.
J: I see, so this is a kube-proxy bug somewhere, where for some reason it doesn't actually create those endpoints on a certain version of Windows. I suspect it's probably related to, like, a Windows patch version or something; I'm not sure, but I think it's assigned to me.

B: And it is, James, yeah. Awesome, thank you.
B: Lars and Ricardo were thinking up ways we could get around this. Who is it assigned to? It's assigned to nobody. I can assign it to either Lars or Ricardo, or somebody else if they want to take it.
K: Is it worth the effort to add and remove the iptables rules for when it's empty? We add a rule, and when it gets an endpoint locally, we remove it.
B: Yeah, it might be. I don't know; there are other iptables rules being written in there. So if it's easy, then yes, it's worth it. If it's really, really hard, then no, it's not. And the truth is, it's probably somewhere in between, yeah.
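[Editor's note: to make the add/remove idea concrete, here is a minimal Go sketch, assuming iptables is available on PATH. The chain placement, comment text, and function name are illustrative assumptions, not kube-proxy's actual rules.]

```go
package main

import (
	"fmt"
	"os/exec"
)

// syncNoEndpointsRule sketches the behavior under discussion: while a Service
// has no local endpoints, append a REJECT rule so clients fail fast; delete
// the rule again as soon as an endpoint shows up.
func syncNoEndpointsRule(clusterIP string, port int, hasEndpoints bool) error {
	op := "-A" // append while the Service is empty
	if hasEndpoints {
		op = "-D" // delete once a local endpoint exists
	}
	args := []string{
		"-t", "filter", op, "INPUT",
		"-d", clusterIP, "-p", "tcp", "--dport", fmt.Sprint(port),
		"-m", "comment", "--comment", "no local endpoints",
		"-j", "REJECT",
	}
	if out, err := exec.Command("iptables", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("iptables %v failed: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Example: a Service at 10.96.0.10:443 currently has no local endpoints.
	if err := syncNoEndpointsRule("10.96.0.10", 443, false); err != nil {
		fmt.Println(err)
	}
}
```

[The cost question in the discussion is exactly this churn: every endpoint transition becomes an extra iptables operation during sync.]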
B: Next, a flaky test: "services should function for service endpoints using host network".
N: We're well aware of it. We have a bunch of those tests that are currently listed as Linux-only, and it would just be nice if they were more portable.
B: Sure, okay. Cloud provider service controller race with node sync and service updates: this is a particularly fun one, because I don't think it's actually specific to this case, but the example I thought was interesting to read. It's possible that in the process of deleting a load balancer, we also end up recreating the load balancer and then leaking it, because the delete was already observed.
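[Editor's note: as an illustration of the kind of interleaving described, here is a minimal, self-contained Go sketch of a check-then-act race between a delete and a concurrent node sync. The fakeCloud type and method names are hypothetical stand-ins, not the actual service controller code.]

```go
package main

import (
	"fmt"
	"sync"
)

// fakeCloud stands in for a cloud provider API; all names here are hypothetical.
type fakeCloud struct {
	mu  sync.Mutex
	lbs map[string]bool
}

func (c *fakeCloud) ensureLB(name string) {
	c.mu.Lock()
	c.lbs[name] = true
	c.mu.Unlock()
}

func (c *fakeCloud) deleteLB(name string) {
	c.mu.Lock()
	delete(c.lbs, name)
	c.mu.Unlock()
}

func main() {
	cloud := &fakeCloud{lbs: map[string]bool{"svc-a": true}}
	var wg sync.WaitGroup
	wg.Add(2)

	// The Service was deleted, so this worker tears down its load balancer.
	go func() {
		defer wg.Done()
		cloud.deleteLB("svc-a")
	}()

	// Meanwhile a node sync still holds the old Service in its snapshot and
	// re-ensures the load balancer, possibly recreating it after the delete.
	go func() {
		defer wg.Done()
		cloud.ensureLB("svc-a")
	}()

	wg.Wait()
	// Depending on the interleaving, the LB can survive with no Service left
	// to observe it: the leak described in the issue.
	fmt.Println("lb still exists:", cloud.lbs["svc-a"])
}
```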
B: Kubernetes bypasses external firewall ports. This was a really fun one to read; apparently we're magical and we can bypass firewalls. There's information missing, so I've already asked for more help there. Jay also commented a little bit, so unless somebody really wants this one, I'll just assign myself. I mean, I think that one's probably just going to wind up getting closed, right, because it can't possibly be true, right? My suspicion is, you know, the firewall rule is insufficiently precise or something.
B: Okay, last one then: dockershim deprecation. I think this is mostly about documentation; every time we get to the end of this: yes, we need to write up somewhere how an end user who was using dockershim, or using kubenet before, can recreate that behavior with bog-standard CNI. Does anybody know, does such a doc exist?
J: Dan, I'm pretty sure it doesn't, but you can assign that one to either me or Casey Callendrello (squeed), or just assign both of us. Why not?
G: Thank you, Tim. Can we do one more? There is a probe failure one; you have it open as a tab, and I'm just waiting for some feedback. Either you look at it and we provide feedback later, or we do it now. But I'll look at it, yeah, thanks.
A: So I was next; I wanted to share. I sent an email.
N: A question about this: I see that we're concerned, possibly, about not having done a community-wide update, and I didn't dig in far enough to find out what that actually is. But is it just that some members of this group report somewhere else, in some form? If that's the case, could we get around it by saying, well, we haven't done one in a while, but we've scheduled it? I volunteer to help with making it scheduled and happen.
A: Yeah, that's great; I mean, we should definitely schedule one. It's been a long time since at least the last one I was aware of. So I think this document is more just about making us think about those sorts of things, you know.
B: Clearly, I think that's our only way. That's our only way out of the obligations that we hold.
A: And I think at this point we are overdue; the date to have this done was last week, yeah, so the sooner the better. But, you know, we should make sure we're getting it right. So if we think it takes two weeks to gather up that information, well...
B: So if we put it one week out, maybe they'll do it in a week instead of two weeks. I was out the first half of this week (that's my own excuse), but it's in my inbox to do. Should we set, like, a week from today?
B: Maybe we can, I don't know. How does tomorrow look for people? Regroup in... we have our usual thing, the kube-proxy meeting.
B: We can talk about it more there, the goal being that this time next week, we're done.
L: Oh yeah, okay, all right, okay. Can we... oh yeah, the flag deprecation thing. So we were talking about this Friday: is it okay for us to finally start deprecating those flags? We'll do all the details of it, but in general, are people okay with that? Even if so, is there anything, or can we just do it?

B: Sorry, which flags? I don't know the agenda.
L: You know, all those flags in kube-proxy that don't work, which is most of them; they're, like, no-ops.
B: Yeah, the no-ops seem fair; they should be aged out by now, right? Yeah. Well, you can use them to generate a kube-proxy config.
L: Okay, yeah. So, okay, so we'll just move forward with that on this KEP. And the other part of the KEP is making some strict failure scenarios, I guess, where somebody puts in a flag and the flag doesn't do anything; it's going to break, right. So as long as people are just aware of that; it might break people with Ansible scripts that have... you know what I mean.
L: All right, cool. So that's good enough; I think that's enough for us to move forward with.
A: Right, thanks, Jay. Ricardo, you're next, yeah.
O: I'm gonna be really fast. First of all, sorry about leaving my mic open; I click on mute, but it keeps coming back on, sorry. So, I have started some effort around moving kube-proxy outside the repo. I was chatting a little bit with Antonio, and also with our group working on kube-proxy, and I have opened four PRs so far, me and Casey.
O: We wrestled last week with trying to do that kube-proxy vendoring, so yeah, even if we are doing some KEP thing, I think it's worth moving that outside of the repo, right.
O: So I've opened four PRs. Mostly I'm doing some small movements here, but there's one that's pretty big, and I would like to ask you folks to take a look into that review, the big one. It probably just needs the approvers team, because it messes with API and other things, mostly moving constants outside of apimachinery. But yeah, I guess... I've seen also an email.
O: I'm gonna probably keep updating you about that. And the last thing is: do you people think that we should open a KEP for the last part, which is moving pkg/proxy outside of the repo, or not? My opinion is that probably we shouldn't, because I always hear that if you are vendoring kubernetes/kubernetes, you are going to suffer and have some pain, and we are not responsible for that.
B: And a KEP would capture how we communicate, even within the broader Kubernetes world, right; there are other folks who might have feelings about this, who aren't here, who don't come to our regular meetings, but would see a KEP. So I would say yes, and specifically to discuss the point about whether it should go live under staging, or whether it should actually, really move out.
L: That's it. So, are folks okay with this, and it's just a matter of doing the KEP? Or is there anyone who's like: oh, this is a bad idea? Because, like, we could do it, you know what I mean, but there's not really much point doing it if people don't want us to do it. Is everybody else okay with it?
B: Yeah. There was at some point a request to move some of those more complicated packages off to kubernetes/utils.
B: I remember reading a pull request for... I don't think it was async, but it was something similar to that in scope, and we found some issues that were, like, actual real bugs with making it a standalone API. It turned out the person who was proposing it didn't have that much energy and had walked away from it. Those bugs still exist, but since they're entirely internal, they're less of a huge deal.
O: Yeah, the async one cannot be moved to utils, because it vendors another Kubernetes repo; okay, that's client-go. But I am moving one that's called "slice", and I am, like, fighting with the old history, because it's pre-CLA, pre license agreement, and pre merge message. So when I moved it with all of the history, I got a lot of pushback from prow, like: you cannot commit, you cannot do that. So my question is about that.
B: Yeah, I think we should keep the history, even if we end up adding one commit that says "remove all of the files that we don't need to move this between repos", and then we just have...
E: ...a bunch of, you know, not-so-useful commits. The bots won't let the PR merge, because the old commits are not up to snuff. So, I had done a PR to move BoundedFrequencyRunner, which is probably one of the things that you were looking at in async, Ricardo, to utils, and you had approved it, but it couldn't merge. And then we were like, oh, we'll deal with this after freeze, and then I kept forgetting, and fejta-bot actually just closed it this morning.
B: Well, there are still a small number of people who can manually approve merges, so if we have to do that, we can.
G: The problem with util is you're telling people they can vendor it, which means you're creating a contract around that API; and I'm not talking about a REST API, I'm talking about the golang API, yep. And let's just say that, at least for the APIs I've looked at, it's not exactly the cleanest, right. It has things where, in the way we use them, these are things we can always work around.
B: So the line... I mean, unfortunately, the way the Go ecosystem works is: if you can see it, and it's not in an internal directory, it's fair game. We've at least written, in various places, "don't vendor kubernetes/kubernetes, because we won't support you", right. All of the other repos, though, are less well defined, less bright-liney. utils was explicitly done so that people could take useful stuff out of the Kubernetes ecosystem and use it, and part of the meaning there was...
B: ...we will support this. Whenever we bring something into utils, we hold it to a generally higher bar. None of it is perfect; there are some things in there that I look at, too, and I go, I wish we hadn't done that. But we do hold it to a higher bar for testing, and for documentation, and for API cleanliness, than we do for internal stuff. You know, something like async is big enough that it might just warrant its own repo.
G: My problem with offering these things to people is you have no control over what these things can vendor in. So let's say we're in a situation where we can use these things in kubernetes/kubernetes, but they vendor some weird thing, and we're okay with that, and we understand it. Now, if we offer it to external people, then it becomes shared; even the stuff that you vendor in, the stuff that you build into your tools, becomes something we need to worry about. I'm just, yeah, not...
B: utils is supposed to be almost import-free; there are supposed to be very few imports there, and there's a rule that it can't import other Kubernetes repos. So if we're really vendoring client-go, which boggles my mind (I'd have to go figure out what the heck we're doing there), then yeah, that is not a candidate, right.
B: So I agree with you. I'm agreeing that something like async is big enough, and subtle enough, that we don't want to copy it internally; we want to find a place where we can share it amongst each other, and vendoring kubernetes is a non-starter, so we have to find a place for it. Even if we put a big thing in the README that says: this is really project-internal, please don't mess with it.
A: Thanks, Ricardo. Dan Winship, I think you're next, on dual-stack, yeah.
E: ...out, but, you know, people can start commenting. I realized that, since it's going to be behind a feature gate, that means it talks about some pain, and flapping connections or services and stuff like that, but that's mostly for the early adopters; once the feature goes GA, it should all work pretty smoothly.
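[Editor's note: for readers following along, here is a minimal sketch of the feature-gate pattern being described, using k8s.io/component-base/featuregate. The gate name below is hypothetical, used only for illustration; it is not the actual dual-stack gate.]

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// MyDualStackFeature is a hypothetical gate name, for illustration only.
const MyDualStackFeature featuregate.Feature = "MyDualStackFeature"

func main() {
	gate := featuregate.NewFeatureGate()
	if err := gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		MyDualStackFeature: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}
	// Early adopters opt in explicitly, e.g. --feature-gates=MyDualStackFeature=true;
	// everyone else keeps the default-off behavior until the feature graduates.
	if err := gate.Set("MyDualStackFeature=true"); err != nil {
		panic(err)
	}
	if gate.Enabled(MyDualStackFeature) {
		fmt.Println("dual-stack code path enabled")
	}
}
```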
B: I liked the bug you filed, by the way, on the endpoints ownership; that was a good one.
H: Okay, yeah, mine is really simple. It's just: we have a fair number of people for Gateway API in lots of different time zones, and so we're trying to find times that work for as many people as possible.
S: Yeah, so this is a quick question for the team. You know, there are two options: one is that it is part of the networking API group, or we host it in a kubernetes-sigs repo specific for cluster network policy. I think the general feeling is that we will host it in a separate repo under kubernetes.
S: The question is: should it be specific to cluster network policy resources, or should it be a more generic network policy repo? Because there are discussions around, you know, a v2 network policy, which might have its own CRDs. So should they all live together, or should it be specific? There would be some issues, because we already deprecated and removed the v1alpha1 for network policy from the networking group, so yeah. I think Antonio had some comments around that. So maybe... it sounds like it is a good idea to have it as a separate repo.
B: Do we have any belief, or understanding, of what the other things would be? So, one...
S: One example that comes to mind: I think somebody is working on something like a DNS policy resource. I know Jay, Ricardo, we are all talking about a v2 for network policy, which may be a different resource by itself, kind of an evolution of network policy v1, okay. Now, they are not all necessarily related to each other; I mean, it's not something that will graduate together, but again, they fall under the umbrella of network policy.
S: Okay, so maybe we'll start the work on getting a repo for this under kubernetes.
H: A small follow-up question here: would it be in the same networking.x-k8s.io that Gateway API is using? And is there any kind of... you know, we have previously deprecated versions of API groups for core types.
B: So I think there are two issues. One: should we try to coordinate one group across multiple efforts and repos? And two: should it still be using x-k8s? Like, I believe at some point in Gateway we said we need to decide, at some point in the future, whether it's going to stay as x-k8s.io or k8s.io, right, yeah.
B: So, you know, the distinction between x-k8s.io and k8s.io being: things under x-k8s.io are not subject to the Kubernetes API review process, and things under k8s.io are. My feeling would be that we should probably move it under k8s.io for Gateway. I don't know how coordinating across different repos would work; we'd need to have some coordination scheme, but it could work.
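[Editor's note: to pin down the naming distinction being discussed, here is a small Go sketch using apimachinery's schema.GroupVersion. Both group names below are illustrative assumptions, not settled names; the policy group echoes the suggestion made later in this discussion.]

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Groups under *.x-k8s.io are exempt from the Kubernetes API review
	// process; groups under *.k8s.io are subject to it.
	experimental := schema.GroupVersion{Group: "gateway.networking.x-k8s.io", Version: "v1alpha1"}
	reviewed := schema.GroupVersion{Group: "policy.networking.k8s.io", Version: "v1alpha1"}
	fmt.Println(experimental) // gateway.networking.x-k8s.io/v1alpha1
	fmt.Println(reviewed)     // policy.networking.k8s.io/v1alpha1
}
```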
B: So then the question is: could we coordinate one namespace like that across multiple projects? We would need something, somewhere, to say: hey, I'm using the fubar name, nobody else can use it, right. I don't want to say a spreadsheet, but, like, a git repo somewhere that lays claim to names within that namespace.
B: Storage would be the likely one; if anybody's doing it, it's probably storage. So maybe we should ask them what they've done with this, and just follow the pattern. If they created multiple namespaces, then maybe we should just do that.
B: Yes, I mean, that's one way to do it, right. It could be gateway.networking.k8s.io and policy.networking.k8s.io, yeah. But within policy there would be three separate projects creating different CRDs, yeah. Now, there's a new issue that just popped into my head there: like, I know you're not allowed to share an API group between CRDs and built-ins, but I don't know if that applies to suffix sharing; I'm presuming it doesn't, but I've never tried.
B: So we should try a few things and then talk to storage. So, who's taking the action there? Do you want to reach out to the storage folks?
B: Do you know any of the storage people? Do you want me to? I can do it.

S: If you can point me to someone, I'll pick it up.
H: And I'm happy to be part of that, and/or start that discussion, okay; it could affect Gateway as well.
A: Thanks, everybody. I think... let's see, next is from SJ.
M: Yes, I'm here. A quick note about the KEP for all-port services support: this is a work in progress. I've dropped a link in the agenda. Thanks to everyone that already reviewed it and offered alternative approaches as well; it'll be great to get some more comments on it, mostly about the use cases for all-port services.
A: And then, just in time, we're at the last topic, from Clawdu.
L: Does anybody know who would work on that, or if people care about doing that? It would allow, you know, kube-proxy and everything else to not have to go through load balancers. It's an issue that we were triaging in the netpol subproject, or in the kube-proxy subproject, and I asked if we could close it, and then somebody yelled at me, so I figured...
B: Right. I mean, I don't know exactly what the implications are, but the environment variables that we've always used are singular, and so anybody who's parsing those environment variables today would break if we changed the format. We didn't define it as an array, so we'd have to... like, there are multiple levels of discovery that happen there for the endpoint IPs, right. Like, if we're talking about the IPs that come out of a kubeconfig, versus the IPs that you find in the... what's it called... in the environment, yeah, yes, in the environment. Thanks.
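[Editor's note: as background for why the format matters, here is a minimal Go sketch of the in-cluster discovery variables being referred to. KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are the standard names; each carries exactly one value, which is why existing parsers would break if a list were stuffed into them.]

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Injected into every pod; each variable is a single value, not a list,
	// so consumers parse them as exactly one host and one port.
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	if host == "" || port == "" {
		fmt.Println("not running inside a cluster")
		return
	}
	fmt.Println("apiserver:", net.JoinHostPort(host, port))
}
```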
F: Yeah, they want this because... the use case I want to use this for is, for example, kube-proxy itself, because you cannot have a Service until you have kube-proxy, and I don't want to put a load balancer in the control plane, you know. That's the use case: I just take the config and have kube-proxy go to these three API servers.
L: What should we do, as we're going through issues, with stuff like this that's just not going to be solved, but it's not an invalid request? Like, is there a label for... like a bikeshed label or something, where you could say: okay, well, let's just keep talking about this forever?
B: Well, no, because... I mean, I think there are two categories there. There are the ones where we don't know quite what to do yet; we think they're valid, so let's leave them open and see if we can discuss them later. And there are the ones where we agree they're valid, but we're never going to do it, so let's just close them, because why have 6,000 open bugs?
L: How do we prove to someone that we're never gonna do it? Like, what's the bar there? I'm happy to just bring them up, like, every once in a while. And for this one specifically, should I just say that? Because I feel the same way; like Antonio mentioned, I don't feel like it's ever going to happen, but...
G: It's not even a fix; it's a feature request, right. Another thing: I need us grounded a bit. We have the luxury of clouds, so load balancers are there, yeah; on bare metal it's harder, and I'm guessing whoever is pushing for this has a bare-metal deployment, because that's where you suffer the most.
B: Yeah, and to be clear, in this case I don't object to leaving it open, if we think that, you know, given somebody with sufficient incentive, they would actually implement it in a way that we could convince David Eads to accept; then I'm cool with that.
G: I'm not saying we should say we're not gonna fix it; I'm just saying, let's keep it, let's think about it, all right. If it's one of those things that will require a large surface area, then we need to schedule it in a way that people understand: yeah, it's a corner case, and we would like to add it, but it will take time.
B: How long, yeah. Unfortunately, these are the sorts of issues where we ask people to file a KEP, and then we spend three months pounding on the KEP, and then they get frustrated and walk away from it, very often. And it's not that we should change the process; it's just, you know, this is what happens sometimes.
G: For a person like me, who works mostly in a cloud environment, something like that is, like, no, all right. But the vantage point of somebody who's using this on edge or bare metal might be fundamentally different, right. So I don't want to start...
O: Yeah, I was going to say that probably this could be something like: I am an on-premises bare-metal user, and this could probably be, like, best practices for on-premises; like, you can run haproxy and keepalived, and this is the way that we suggest you do this. It's not, like, the end of the world. I know that there is the case of the edge computing thing, but for the bare-metal one, mostly, probably we should say: hey, we can...
B: You know, it's like slaying a dragon: do we close the issue and pretend it never existed, or do we leave it open and let the number of open issues just continue to grow over time, right? We're at 2,000 open issues right now.
B: "Won't fix"... or maybe we need a different way of denoting ideas that we would love to do. We used to have a version-two accumulator bug, where every time somebody came up with one of these ("this would be awesome, but it's so challenging, it requires a v2"), we'd say: just go add your ideas to that. And then it just became like screaming into the wind; it's like a venting place.
B: Where did that go? It's gone. I think the issue itself might still be around, but nobody takes it seriously. There are no plans to make a v2 that breaks things in any fundamental way, so a lot of those ideas are just non-starter ideas; they're just too big to be worthwhile.