From YouTube: Kubernetes SIG Network Meeting for 20230105
Description
Kubernetes SIG Network Meeting for 20230105
A: We adhere to the CNCF Code of Conduct, which boils down to "don't be mean." We record these meetings and post them on YouTube for your records. This is my first time running this, so please be gentle.
A: [A pause with an awkward silence] All right. Well, it looks like we have a few items on the agenda: triage, new SIG chairs, the multi-network KEP, and then some KEP review and more KEPs. Tim, do you want to kick off the triage?
B: Sure. I appreciate everybody who's gone through and done some pre-triage, to the extent that I've only got three worth bringing up today. Everybody see my screen? Okay, yes. All right, number one: NodePorts listening on external IPs that are also used as load balancer IPs. Lars is all over this one, which is awesome. It does seem like a bit of a breaking change that we've made at some point.
B: It's a weird use case, but I wanted to raise it here, because it highlights the problem that, you know, people mix and match the way things work, and every aspect of our implementation eventually becomes part of the API. So.
B: iptables I didn't confirm yet; I actually had to rebuild my tree, so I didn't get to confirm it this morning. I feel...
B: But I can't remember if we fixed it on iptables. The use case here that's interesting is that it's actually the node's IPs, not an allocated load balancer IP, and so it's reasonable that the node IP would have a node port. But they also use that node IP as a load balancer IP, and so it seems like it would be reasonable that it would also support the load balancer port. IPVS detected one of those cases and broke the other. Yeah.
B: Okay, so Lars is on this; I just wanted to bring it up here. I thought it was a very interesting case. We probably should look at whether it's possible to fix the node ports appearing on load balancer IPs in the iptables one.
B: But Hyrum's Law says that if it's part of the API for long enough, and there are enough users, then eventually it becomes part of your contract whether you like it or not. So what we need to figure out is whether this is isolated, whether this is just one person doing something sort of edge-casey, and we can say "here's the workaround for you," or they can adapt somehow; or whether we say this is unfortunate, let's put a test on it and make sure that it keeps working.
B: I mean, I try to put myself in their shoes, and how unhappy I would be if I upgraded over the holidays and broke my cluster. So, okay, it looks like Lars, are you on this one? Do we need to talk more about it?
B: If we can get away with leaving it the way we want it to be, as opposed to the way it has been; yeah, what Bridget just said. We should document somewhere: if you're in this situation, where you're using the same node port as a load balancer IP, then you shouldn't expect both to work, and you should instead just use the node ports directly.
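A minimal sketch of the kind of Service being discussed, assuming a node's own IP (10.0.0.5 here, purely illustrative) is also handed to the Service as its load balancer IP:

```yaml
# Hypothetical illustration: a Service whose loadBalancerIP is also a node IP.
# Per the discussion, traffic to 10.0.0.5:30080 (the nodePort) and to
# 10.0.0.5:80 (the load balancer port) may not both work in every proxy mode.
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.5   # also one of the node's own IPs
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```

The documented workaround above amounts to: in this overlap situation, rely on the nodePort (10.0.0.5:30080) rather than expecting the load balancer port to work too.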
B: Okay, next: IPVS scheduler. Lars, now that your change went in, should we close this issue?
G: No, he still can't set his wanted scheduler, the Maglev one, because there are restrictions in the validator. I don't know if I should remove them, but now it's possible to easily support any scheduler, even if they write their own. Right.
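For context, the IPVS scheduler is chosen through kube-proxy's component config; a sketch (`mh`, IPVS's Maglev hashing scheduler, is the value the validator reportedly rejects, which is exactly the open question above):

```yaml
# Sketch: selecting an IPVS scheduler via kube-proxy's config.
# "rr" (round robin) is the default; "mh" (Maglev hashing) is the
# scheduler the discussion says the validator currently restricts.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  scheduler: mh
```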
B: And this one was left over from last time. We don't have an assignee for a Windows winkernel feature request; that would be Mike, Mike Z.
B: Right, so we've been having discussion about a changing of the guard on SIG chairs. Casey and Dan have been doing this for a long time. It's good to get fresh meat into the grinder, I mean, into the process. So we've been talking about it: Michael has volunteered, Shane has also volunteered, and we're working on defining what the process actually is and what it means. To that end:
B: I started on a doc, which I will share hopefully today, about what it means to be a chair, in a little bit more concrete detail. Hopefully we can check, excuse me, check it in somewhere, in a YAML file, or not a YAML file, a markdown file. I was talking with Fede on the SIG API Machinery side and Dawn on the SIG Node side, who are sort of going through the same thing, so trying to see.
B: If we can come up with one doc that covers multiple SIGs' descriptions; they seem to operate more or less the same way we do. So hopefully I'll get that doc out soon. But as everybody can see on this call and the recording, Michael's already starting to run the show. So congratulations; we'll get the doc out there and then we'll figure out what the sort of officialness is. Right.
D: Yeah, definitely. Let's, can you hear me okay? Yeah? Yes? All right, so hi everyone. I'm not sure, some of you might remember, some might not: back around September we kicked off the multi-networking effort here in SIG Network as a subproject, and basically today I want to talk about the first deliverable that we managed to achieve, which is in the form of this PR on the KEP.
D: It's a bit unconventional compared to what you might be used to, and the reason behind this is that multi-networking, the whole networking story, is wide and very large. So we decided to go about this with some sort of structure, and basically our first idea was to define a frame within which we want to work and move forward.
D: Moving on, then: when we work on each of the phases, we will have a strict scope set in stone. That's the goal here, so that we don't get distracted by other folks coming in at any point and pushing us in some specific direction that we said is not in scope; we can then say it's not in the scope. That's why we systematized our work, so that we will be able to push forward. Any comments on that?
D: I'm taking the silence as agreement and compliance, that everyone thinks that's a good idea. Thank you. Then, do we want to go through the KEP itself, or should we just leave it to reviewing? Not sure.
D: Yeah, definitely, there are some comments going on. Yes, definitely, and I always comment, trying to be active there as well as we go. I think this is where I can stop; going over all the stories and the phases would probably take the whole meeting, so I will leave that up for you to read.
D: I gave you some general overview of what the purpose is and what the story behind this whole KEP is. As we go, we still have the meetings, and we are going to kick off and focus on phase one of this KEP, of course, as this one gets reviewed and hopefully merged, eventually, maybe next month. Then we will start working towards the first phase, and I welcome everyone to join our discussion.
D: Everyone that's interested in this topic, by the way, just to let you know. Rob, you had a hand? Yeah.
D: Requirements, and so no design, nothing, no, exactly, right. Even in the stories, and I do spell that out in the text, I hope I drive that through, but in the stories I'm trying to make them real-life use cases. So I'm using real-life components like, for example, kubelet, but that doesn't mean this is prescriptive; it's just for the purpose of providing a real-life use case for why we do it this way, not the other way around. Right.
D: I didn't even think about aligning myself with any of the releases, so I didn't even think about that one. And I don't think about it, because it's such a complicated topic, at least I think, and so spread out and wide, that I don't even know at what point we can get them done. So I am hoping for the first phase.
D: First, phase one is about what the initial API will look like, and endpoints. But to what you're saying, I think it will be good to have it in. There is no implementation behind this one; it's mostly only documentation, so it's not like I'm in a rush to get it in and get it merged, right? I would like to have this done by the end of the month, hopefully, so I would appreciate everyone reviewing this. But, I'm not sure, did I answer your question?
J: I think so. It sounds like you're not looking to merge code for, you know, at least four or five months. Yeah.
D: Any other questions? If not, then: okay, thanks everyone. Michael, back to you.
B: Sorry, I live-inserted myself. I wanted to throw something out real quick before we run out of time on the really interesting topics: KubeCon EU maintainer-track proposals are due January 27th.
B: Historically, we have a Kubernetes SIG Network intro and deep-dive session that is led by somebody from the community at large. Last time Andrew and Surya and Rob and others did the work. We're looking for volunteers to coordinate, write the proposal, and get it in. We have approximately three weeks to get that proposal in, so I'll put that out here: if anybody's interested, ping me, or ping Bowei or Rob, who have also helped coordinate this in the past.
B: Possible, that's true, yes. I don't know about everybody else, but with the economic state of the industry right now, a lot of places, I hear, are cutting down on travel. So perhaps it's an awesome opportunity for somebody who's in Europe and has a very high probability of being in Amsterdam to lead a session. It's not a super intense session, and we're here to support you and help get the material up. So it's a great opportunity for somebody who maybe hasn't spoken at a KubeCon in the past to get on a stage.
A: Cool, well, thank you. It looks like you're the next individual on the agenda there, Tim. Okay.
B: KEPs. So I started going through KEPs. I created myself a project board, which I've shared, of all the KEPs that I'm paying attention to, sort of for my own process and for transparency of all the KEPs I'm watching. It's currently at 63 KEPs, and we have about a month to get those in.
B: So it's going to be a bit of a challenge for me, and I apologize to folks as I time-slice between them. I see other folks have added themselves here about specific KEPs that they wanted to talk about. So why don't we do that first, and if we have time we can look at the KEP board for the project. So, Dan, you're first up. You want to talk about your new one?
H: The documents, then, okay, more or less, yeah. It's only that within that KEP I might have a topic that could almost be a new KEP in itself, which is regarding kube-proxy restarts or upgrades and how that impacts load balancer ingress for stateful applications. I put a note in that KEP, so you'll see it when you go through it. Yeah, okay.
B: I will do. Yours is one that I'm very interested in. I'm going to try to service the older KEPs first, and I would ask everybody else who's reviewing KEPs to do the same, unless a newer KEP is very urgent or there's some dependency inversion, like between these two. There may be a dependency; they're both touching on the health-checking area. But anybody here who is reviewing KEPs: let's try to get the older ones serviced first, because, you know, fair queuing and all that. We try.
B: I started using the new project interface for my own KEP-tracking stuff. I haven't spent a lot of time on the views, making it work, but anyway, I have my list here. So Dan was number...
B: All important. All right, let's go from the end. Among things that are GA we'll find: gate already removed; and GA merged, gate not removed. We need to touch all of these and figure out if they're updating. So, dual stack: Cal, are we removing the gate in 1.27?
G: Yes, awesome. No, hold on, no, I...
B: Okay, let me, I'm going to update it to 1.27. Oh, there's not a milestone for... so there it is. I'm going to update it to 1.27, the goal being just to keep track of the touch. But it looks like the issue is closed, so maybe we did remove it.
B: There are too many steps in this process right now. This is one of my personal pet peeves: we go and remove the gate from the code, but we forget to update the issue and the tracking board.
B: Please double-check on that.
B: Awesome. Internal traffic policy: when does this get removed? When did it GA? Do we have any updates?
B: Sorry, I just remembered that you were paying attention to it. Okay, I will tag it for 1.28; if that's wrong, we can move it. Network policy port range: when did we release... when did we GA this?
B: Okay, I should do a better job of updating this, so...
B: We need to figure out when we should actually close this. Should we close it before the gate is removed, or after the gate is removed? It doesn't matter, it's in the tracking board either way, but we should figure out what we want to do in terms of life cycle. I keep meaning to draw a state diagram for this. All right, and that's it for things that are GA. Awesome.
B: Oh, do you remember what the gate was called? You could have... The MixedProtocolLBService one is still here. It says it was GA in 1.26, so the gate will still be there until 1.28. Okay, perfect, awesome. Does that mean we didn't GA anything in 1.25, which would have been coming out in 1.27? Interesting. Okay, that doesn't seem right, but maybe. Okay, let's look at these. ExpandedDNSConfig is currently beta; I have the features open.
B: Expand, not expansion: ExpandedDNSConfig is beta in 1.26, so let me ping it here.
B: Okay, ProxyTerminatingEndpoints: it went beta in 1.26, so I will ask the same question.
B: Yeah, please do. It's marked as 1.27; the goal is to GA in 1.27. The feature gate says it went beta in 1.24, so no reason not to. Oh, it needed kubelet support, that's what it was, right. So 1.27 will be the first one where we can assert version-skew safety. Yes? Not sure? Okay, well, it's tagged 1.27; I'll ping it just to be sure.
J: Yeah, I don't know what to do with this one. There's been lots and lots of discussion about something that I think is a parallel proposal, which would enable users to specify, you know, forcing traffic to stay in zone, instead of this kind of magic, or attempted-automatic, approach here. I feel like it may be safer to track that separately. We've been having all of that discussion in the prefer-local KEP right now, but until that's resolved, I don't really know what to do with this KEP.
J: I've gotten feedback that it is working well for a number of people, but, you know, that's really all I have to base it on at this point.
J: So there are some complaints. I don't know, I mean, I think that's kind of working as intended; it's just that what's intended does not solve everyone's problem.
B: Let me re-ask my question. I agree that there's a separate, like, policy proposal, and Dan, we should talk about whether we want to expand your KEP or create a separate KEP for zone-level topology; let's put that aside. Here, on this one, which is the sort of automatic, heuristic-driven implementation: the heuristics can always be better, but do we need to block GA-ing this on improving the heuristic? Or, like, if the heuristic isn't hurting anybody?
J: Yeah, I mean, I think that's reasonable to me. I honestly don't know what else we could do in the scope of this one. Like, I think there is legitimate feedback that another policy, another way to kind of force this on, is valuable, but I don't know that that needs to delay this specific set of features. But open to feedback.
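The "automatic, heuristic-driven" behavior being weighed here is Topology Aware Hints, which a Service opts into via an annotation; a sketch (details as of roughly this era of Kubernetes, so treat the specifics as assumptions):

```yaml
# Sketch: opting a Service into the heuristic-driven topology hints
# discussed above. "auto" lets the control plane decide when to set
# zone hints on EndpointSlices. There is no forced "stay in zone"
# setting; that is the parallel proposal mentioned in the meeting.
apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: example
  ports:
  - port: 80
```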
J: Rob, I personally do, but I, you know, want to be, you know, open to others, but...
B
Okay,
I'll
Milestone
it
for
27
Let's.
Do
me
the
favor,
then
Rob
try
to
collect
some
thoughts
and
feedback.
If
there's
any
reason
not
to
we
can
push
it
out,
but
I
haven't
really
heard
anybody
say.
Oh
my
God
topology
broke
me
just
oh
my
God.
It's
not
good
enough.
B: It's true. I have a separate doc that I'm working on that I want to share soon, rethinking how we do alpha/beta/GA for features, because I don't think it's working very well for us as a project overall. We're not getting good feedback, and we're sitting in beta state for a very long time. So I've got a doc that I'm hoping to share soon.
B: That's what I did over Christmas. Okay, then, you're milestoned, good, and none of these, okay, good. Let's go back to alpha: things that are currently alpha and want to go beta in 1.27.
B: Okay, Ricardo, are we going to end-of-life this one?
B: Oh yes, good idea. Let me, then, I'll milestone you for 1.27. Okay.
B
That
it
totally
is
I'll
throw
a
quick
note
here
and
honestly.
This
is
a
healthy
part
of
our
life
cycle.
We
should
be
able
to
try
things
decide.
It
was
a
bad
idea
and
not
do
it
right.
B: Okay, node IPAM for multiple cluster CIDRs. I have sort of lost the context on this one. Anybody here who can speak to it?
B: I can reach out internally, see who's been working on it.
C: Waiting for implementations to get done. I've gotten verbal confirmations from Antrea, OVN-Kubernetes, and Google folks for Cilium, but I don't think much really happened last year on those, so we'll just keep pushing it along. Otherwise, it's just more community outreach, you know, spreading awareness about the API. We submitted a talk for KubeCon this year, so I'm hoping that gets accepted, and yeah, that's where we're at. So I think for the KEP, I mean, we're kind of tracking it out of tree, but once we get some implementations we can probably keep pushing the KEP forward.
E: Wojtek had been a little bit hesitant about this, for whatever reason, in the PRRs. Of course, you know, you could just declare that we don't need a real KEP, and therefore we don't need a real PRR, and then we can go right to GA. It's working fine in the 5000-node scale test.
B: We have also had GA-and-off-by-default before, so it's rare, but we have done it. Anyway, take a look at it. I'm going to try to wrap up here real quick, because I know I'm taking up the whole agenda. iptables chain ownership: same situation. This...
B: Excellent, it's already milestoned. Okay, I'm going to cede the floor back, because I've taken up a lot of time. There are a bunch of things that are not yet in; I'll spend some time going through this.
B: Everybody else is welcome to also take a look at it and try to get these up to date, and if you have KEPs that are not represented here at all, like I've missed tagging them with our KEP board, please let me know on Slack or something. Michael, I'll turn the floor back to you. Cool.
A: Thanks, Tim. Rob, do you want to take your agenda item? Yeah.
J: This is actually very related to that. I just wanted to highlight, and it looks like Tim has already covered almost all the KEPs, but I just wanted to highlight that we're around a month out from KEP freeze; the cycle just always keeps on going. Yeah, I don't know, it feels like we just went through this. But with that said, I had a couple of KEPs I wanted to highlight. They are not KEPs right now, but I am hoping to make them KEPs in this cycle.
J: The first one that I would highlight, I guess, would be wanting to upstream ReferenceGrant. This is something that, actually, I don't think is going to be a SIG Network KEP; it'll probably be a SIG Auth KEP, but it's related to SIG Network, because SIG Network currently owns ReferenceGrant. I don't know that there's much to it; I think that discussion has already happened. It's just that SIG Storage is also using the resource now; it's not really a networking concept, it just started in Gateway API because we needed it.
J: The other thing I want to start, and this is maybe worth more discussion, is appProtocol. I added appProtocol a while ago, and in our spec for ServicePort appProtocol we said everything in here that isn't domain-prefixed should be an IANA service name, which is a really long list of things, but unfortunately it leaves out some really useful protocols that people want to specify.
J: I would like to create just a single-cycle KEP that provides some standard for protocols that aren't standard service names, but where domain prefixing doesn't really make sense either, or at least vendor-specific domain prefixing. I wanted to raise that here fairly early, just because I wanted to see if anyone else had run into a pain point there. I know there are a few implementations that are using appProtocol and using different values to represent the same protocol already.
B: I guess something I've been thinking about for a long time, and just haven't made any time for, was to extract all of the labels and annotations and things that we publish that are effectively constants into, like, a staging package that just defines constants, so that people could use them all over the place. Would it make sense here to retain the IANA rule and say we're going to define a kubernetes.io/grpc, until and unless IANA puts in a grpc one, and if they do, then we can accept both?
B
But
just
say
this
is
a
project-wide
pseudo
standard.
Yeah.
J
Exactly
so
we
basically
we
need
to
come
up
with
a
list
of
gaps
right
now
that
aren't
covered
by
those
service,
names
and
I.
Think
yeah,
kubernetes
or
case
IO.
Prefix
is
probably
what
we
need
to
standardize
on,
but
even
just
I
I
think
it's
a
handful,
maybe
somewhere
between
three
and
ten
protocols
that
we
want
to
do
this
for,
but
if
anyone
has
a
specific
protocol
that
is
not
covered,
that
we
should
cover
and
standardize
on,
be
good
to
get
it
all
in
one.
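A sketch of what the proposed convention might look like on a ServicePort, assuming a kubernetes.io prefix is what gets standardized; the grpc value here is exactly the hypothetical from the discussion, not a ratified name:

```yaml
# Sketch of the proposed convention: IANA service names stay bare
# ("http"), while useful protocols IANA doesn't cover would get a
# kubernetes.io/ prefix as a project-wide pseudo-standard.
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
  - name: web
    port: 80
    appProtocol: http               # IANA service name
  - name: rpc
    port: 8080
    appProtocol: kubernetes.io/grpc # hypothetical standardized value
```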
B: Sure, I would love to see a KEP for that, and the implementation being literally: create a new staging project, create a repo. We could argue over the naming and structure of it, but, you know, it would more or less just hold constants.
C: I have a small, like a really small, thing. The network policy folks have agreed to move our meeting to be more EU-friendly, to Tuesdays at 9 a.m. Pacific time. I opened a PR to change that, but I do not have the admin rights to edit the calendar invite, which has actually been really frustrating at times, like if we want to cancel a meeting, et cetera, et cetera. So I'm hoping we can find a way to maybe give more people the rights to do that.
B: Okay, ping me on Slack with some details and I'll take a look. I don't know how to delegate access to calendar ownership. Okay, maybe there's a way to do it, but if there is, I've never used it.
J: Somehow I got access. I can't send new invites, but I can modify existing invites on the SIG Network calendar, so I might be able to help with this specific thing. And that makes me think that we can also get you access the same way, but it's been a while and I don't remember how it worked. I think it's sharing Google Calendar access somewhere.
C
Okay,
that
would
be
perfect,
but
I'll
go
ahead
and
send
those
details
as
well.
Thank
you
all
right,
cool.
B: Happy New Year, everybody. Glad to see everyone back, and I will try to get to all these KEPs. It's exciting; I'm excited about all this stuff, and I'm super happy to see everything proceeding. You know, like two or three releases ago we looked at our KEP board and we were like, "oh my God, what's going on here, let's start draining it." I think we've been wildly successful at moving things to the right on that KEP board.