From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20210318
Description
Kubernetes SIG Network Bi-Weekly Meeting for 20210318
A: Okay, now we're recording to the cloud. This is the SIG Network meeting for Thursday, March 18, 2021. We'll try to kick it off with issue triage, as per usual.
B: All right, there were 23 open, but I closed a couple, so there are actually 17 open now. Thank you all for doing your pre-homework: going through, pinging, updating, and circling back. I have nine that we should look at, and I have a feeling that today is Antonio's day, because six of the nine are flaky tests.
D: Something is happening, you know, when these freezes start. There was an issue with the instrumentation, that logging thing; I don't know the name, but it was panicking and causing issues with the kubelet.
B: Should we be... I mean, I hate timeouts as a solution to test flakes, but honestly, a bunch of our tests are "do something, wait for the system to respond, and then prove that it eventually worked." Do we expand our definition of "eventually"?
D: No. I was working a lot on that, so right now all the tests, at least the tests that I found, expect a pod to be created and wait for the pod to be ready; you know, not just running, but ready. Then, when that condition happens, the next step starts. The other thing where we had out-of-sync problems was that we were not waiting for the endpoints to be ready. That's another thing we fixed, but there are times when those timeouts, I don't know if it's iptables or what it is, are not enough. You can see it there: waiting to program the rules or whatever, and you cannot.
D: This is something I see here that we should talk about later. Okay, so go ahead. Sorry, no: the thing is, IPVS is something that has been around a long time, but my feeling is that nobody is taking care of it. So, as a community, we should make a decision on what we want to do with it.
B: I know you raised this issue in Slack. Is it on the agenda for this week?
B: This was, I think, the most damning statement.
B: Anyway, here we have this probes issue from last cycle. Cal, you signed up to follow up on it.
B: It's okay. Oh, this is the issue reporting wrong iptables config under churn, with external traffic policy and lots of endpoints. I agree with Casey's assessment: it's plausibly a bug. The question, then, is what can we do about it? If it's so hard to reproduce, we can throw it into the hopper with all the other bugs, or somebody might feel like they want to spend a little time poking around.
B: So before I throw it into the hopper, I thought it was worth putting it out here: does anybody want to go take a stroll through kube-proxy and add some logs and/or metrics? Good first project? Not really.
B: Yeah, I don't know what the right logs or metrics would be here. I'm thinking more: understand what the bug purports to be experiencing, and then see whether there are some strategically placed logs we could add that would say, aha, that list is not supposed to be empty, or something like that.
E: Well, I had one... oh, I was actually doing this proxy for... thank you. Let me see, let me open this doc here. Where is it... oh yeah. Antonio, what should we do about this? I'm kind of the middleman here; I'm just looking at this, and this whole thing about how Endpoints are reconciled is pretty confusing to me. But is there something that needs to be done here, or...?
D: Let me add context, because we discussed this before, I don't know, one week ago, with Robert Scott and team in another thread. The thing is: when you create a Service with a selector that matches pods, it creates Endpoints. Then this person wants to remove the selector and modify the Endpoints, or something like that, but they have to start fighting with the endpoints controller.
D: The controller owns the Endpoints object, so normally there is no problem: nobody can modify it, or rather you can modify it, but after a few seconds it comes back to what the endpoints controller wants. And this person wants to... I don't know exactly what he wants to do, but he wants to do something that the endpoints controller doesn't allow.
E: I just joined... hey, yeah, we were just talking about you. Okay, we're looking at your issue here with the Endpoints. What's the issue number? Oh, is this the right issue?
G: Yeah, so the use case is: a selector-less Service is created, and a corresponding Endpoints object is also created by a controller, and the controller adds a proxy label on the Service and the Endpoints. But when the controller reacts to certain events and removes the label from the Endpoints, the iptables rule that drops traffic still stays there.
G: Exactly, and based on the discussion in that ticket, it seems to be because an update event on the Endpoints isn't reconciled by the endpoints controller at all. So if you update the Endpoints and remove the label, nothing happens. Basically, from the controller's perspective the Service doesn't have an endpoint, so the iptables rule that drops traffic for that Service just stays there.
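For reference, a minimal client-go sketch of the setup being described, with hypothetical names and namespace: a selector-less Service plus a manually managed Endpoints object, assuming the "proxy label" mentioned here is the well-known service.kubernetes.io/service-proxy-name label that makes kube-proxy skip the Service.

```go
// Sketch of the reported setup: a Service without a selector (so the endpoints
// controller does not manage its Endpoints) and a hand-written Endpoints object,
// both carrying the service-proxy-name label. Names and IPs are hypothetical.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	proxyLabel := map[string]string{"service.kubernetes.io/service-proxy-name": "my-proxy"}

	// Service with no selector: an external controller owns its Endpoints.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Labels: proxyLabel},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}

	// Manually managed Endpoints for that Service, labeled the same way.
	eps := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Labels: proxyLabel},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.10"}},
			Ports:     []corev1.EndpointPort{{Port: 80}},
		}},
	}

	ctx := context.TODO()
	if _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().Endpoints("default").Create(ctx, eps, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The reported problem: later removing the label from the Endpoints alone
	// is not reconciled by anything, so the proxy's view does not change.
}
```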
H: I'm maybe oversimplifying this, but if I read through this correctly, this is a non-issue for EndpointSlice, because with EndpointSlice the controller clearly owns the resource and its updates and everything like that. And EndpointSlice, I think, covers most of our supported versions now. So maybe this is not an issue on supported OSS Kubernetes versions, I don't know.
G: Okay. The reason is that we are controlling this Endpoints object ourselves: the Service is created without a selector, and in certain scenarios we want to disable proxying. That's why we are using this label on the Service, and our own controller automatically syncs everything from the Service to the Endpoints, so the service-proxy-name label gets added on the Endpoints as well. But when that label is removed later, the Service is still unavailable from the user's perspective, because of the iptables rule.
B: Yes, that's true. Yeah, so Rob, in the EndpointSlice mirroring world, wouldn't the mirroring controller pick this up and copy it into a slice?
B: I'll throw a comment at the end of this bug with a little bit of a summary of this discussion. Does that make sense?
G: Yeah, and if my understanding is correct, we are also trying to see whether EndpointSlice fixes this, or rather doesn't have this issue at all. Maybe we can just switch to EndpointSlice. Is that understanding correct?
B: Yes. We included this thing called the Endpoints mirroring controller to make it possible for people who are doing what you're doing, manually writing to Endpoints, which is a simpler API, and have that still propagate through into EndpointSlices. But if you go straight to slices, then you shouldn't have this problem at all.
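A minimal sketch of what "going straight to slices" could look like with client-go, using the discovery v1beta1 API that was current at the time; the Service name, namespace, and address are hypothetical.

```go
// Sketch: write an EndpointSlice directly instead of a legacy Endpoints object.
// The "kubernetes.io/service-name" label ties the slice to its Service.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	port := int32(80)
	proto := corev1.ProtocolTCP
	ready := true

	slice := &discoveryv1beta1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "demo-",
			// Associates this slice with the selector-less Service "demo".
			Labels: map[string]string{"kubernetes.io/service-name": "demo"},
		},
		AddressType: discoveryv1beta1.AddressTypeIPv4,
		Endpoints: []discoveryv1beta1.Endpoint{{
			Addresses:  []string{"10.0.0.10"},
			Conditions: discoveryv1beta1.EndpointConditions{Ready: &ready},
		}},
		Ports: []discoveryv1beta1.EndpointPort{{Port: &port, Protocol: &proto}},
	}

	_, err = cs.DiscoveryV1beta1().EndpointSlices("default").Create(
		context.TODO(), slice, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```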
G: Gotcha. We'll evaluate that approach as well, but at the same time I do want to see progress on the endpoints controller. What's our conclusion here?
A: Thank you, guys. So we had a few things tacked on here mid-meeting.
B: We're already at that point? Okay. So we have this old issue from 2017, filed by, I closed the window already, but I think Brian Boreham, and it basically comes down to: the cluster CIDR, as a property of the cluster, is not queryable in any way, and it would be nice if it could be.
B
That's
almost
certainly
correct:
okay,
the
the
rebuttal
for
primarily
for
the
cluster
cider,
but
service
cider,
well,
not
service,
side
or
less.
So
the
cluster
cider
is
an
optional
thing
right,
like
any
cni
or
network
implementation,
could
choose
not
to
use
it
at
all
right
and
in
fact
we
know
some
that
don't
and
or
we
know
some
that
have
multiple
cluster
ciders.
And
so
it's
it's
difficult
to
stick
that
somewhere
and
we
don't
really
have.
B
We
don't
have
any
obvious
place
to
stick
it
first
of
all,
and
even
if
we
did,
it
would
have
to
be
a
list
and
that
would
impose
on
all
these
existing
implementations
that
they
keep
that
list
up
to
date
right.
So
the
issue
sort
of
got
stuck
and
aged
out
and
was
going
to
be
auto
deleted
and
then
was
snatched
from
the
jaws
of
the
fatabot.
B: And the question is: do we want to reopen that consideration? The new context that I bring is that we know that last year there were some changes to kube-proxy to decouple it from the cluster CIDR so that we could use the node CIDR instead.
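As an aside, a minimal sketch of that per-node view, assuming client-go and that the default node IPAM is populating spec.podCIDRs; it also shows why a single queryable cluster CIDR is hard to promise, since the field may be empty or differ per node.

```go
// Sketch: discover per-node pod CIDRs from the Node objects instead of relying
// on a single cluster-wide CIDR setting.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// spec.podCIDRs may be empty if the network plugin does its own IPAM,
		// which is exactly why a single "cluster CIDR" field is hard to promise.
		fmt.Printf("%s: %v\n", n.Name, n.Spec.PodCIDRs)
	}
}
```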
B: So we have fewer things aware of the cluster CIDR, and it is at least technically possible now to have multiple cluster CIDRs that are not contiguous with each other. The question is still: do we want to expose that, considering there's also a proposal somewhere, I forget if it was in a KEP yet, to add multiple CIDRs to the range allocator? And if we were going to add them to the range allocator, it sort of feels like it shouldn't be a flag anymore, because flags are difficult to change at runtime.
B
So
we
would
probably
want
to
have
some
sort
of
resource
in
the
cluster
that
represented
an
ip
range,
and
antonio
tell
me
if
this
starts
to
sound
familiar
some
sort
of
resource
in
the
cluster
that
represents
an
ip
range
and
some
way
to
allocate
from
that
ip
range.
So,
given
that-
and
I
know
I'm
picking
an
antonio
for
anybody
who
hasn't
followed
along
because
he
spent
some
time
thinking
about
how
to
make
the
service
ips
follow
a
similar
pattern.
B: That said, I have no idea if those APIs actually align. Antonio, I promised you we would circle back to thinking about how to make that API work, so I don't have any real update other than it doesn't seem so impossible.
F: Then we really need to disjoin the change to multiples from the exposure of the configuration. There are three different problems, and each one of them has a serious amount of work on its own. I understand we want to tackle all three eventually and complete them all, but each one of them is like a dragon hidden somewhere, and it will come out and eat everything alive, and so on.
F: It's like the Desolation of Smaug, kind of. So I want to disjoin the exposure of that configuration from the idea of multiple disjoint ranges, and from the idea of dynamically changing CIDRs. And I think in sequence they are pretty much as I said: the multiples and then the exposure, or the exposure and then the multiples, and then at the tail end of this, the dynamic changes, for the record.
F
I
think
cloud
providers
and
anybody
who's
working
on
dynamically,
creating
clusters
in
response
to
events
will
love
a
feature
like
that,
because
currently
it's
a
mess,
every
cloud
provider
and
everybody
who's.
Providing
an
api
to
create
cluster
is
asking
the
user
to
do
the
due
diligence
of
non-overlapping
ciders
and
all
that
crap.
So
if
we
can
expose
this
for
them,
they
would
they
would
love
it
because
it
makes
their
life
a
lot
easier.
B
So,
in
an
ideal
world,
nobody
would
ever
ask
this
question
because
they
wouldn't
like
we
should
position
it
as
a
as
a
non-meaningful
thing,
and
I
think
we've
successfully
done
that
for
cube
proxy
right.
It
doesn't
need
to
know
anymore
right.
So
if
the,
if
the
goal
here
was
just
to
publish,
I
don't
think
I
think
we
can
hand
wave
and
say
you
don't
need
to
know
that
it's
the
wrong
question
to
be
asking
right.
So
I'm
not
sure
that
your
step
one
is
useful
without
step.
Two
and
three.
F: I want to highlight a statement: yes, people shouldn't worry about that, but the fact is, people do worry about that. I'll tell you why. Most people, at least in cloud environments, try to configure the native cloud networking CNIs, the kind of thing that provides IPs from the VNet and all of that, and this information is not saved anywhere, so having the cluster carry this as information might be useful. The argument I'm trying to make here is that the scenario where this data is used is outside a single cluster.
F
So
that's
that's
where
it's
tricky
to
say:
oh,
yes,
people
need
that
or
not
again,
I
will.
I
will
like
the
hell
of
change
is
going
to
be
a
hell.
I
die
on,
I'm
not
saying
we
shouldn't
do
it,
I'm
just
saying:
let's
get
the
things
out
of
the
door
first
and
then
we
can
work
on
the
change
part,
because
the
change
mode
is
updated.
B
So
so
there
is,
I
believe,
there's
a
cap
we'll
make
this
the
the
end
of
this
discussion.
Soon
there
is
a
cap
to
add,
there's
two
caps
open
around
the
ipam
node
allocator,
one
to
allow
different
sized
ciders
and
one
to
allow
discontiguous
multiple
ciders.
B: They're both interesting and useful for people to manage this subsystem, even though I wish we didn't have to. There are open questions about whether we should try to fix the node allocator in place, or whether we should actually add a newer, more modern one with a different representation and evolve that way.
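For intuition only, a toy sketch of the core job of such an allocator: carving fixed-size per-node CIDRs out of a cluster CIDR. It is not the actual range allocator code, and the values are hypothetical.

```go
// Toy example: split a cluster CIDR into consecutive per-node subnets.
package main

import (
	"fmt"
	"net"
)

// nodeCIDRs returns the first count subnets of nodeMaskSize bits inside clusterCIDR.
func nodeCIDRs(clusterCIDR string, nodeMaskSize int, count int) ([]*net.IPNet, error) {
	_, ipnet, err := net.ParseCIDR(clusterCIDR)
	if err != nil {
		return nil, err
	}
	clusterMaskSize, bits := ipnet.Mask.Size()
	if nodeMaskSize < clusterMaskSize || nodeMaskSize > bits || bits != 32 {
		return nil, fmt.Errorf("invalid node mask size %d for %s", nodeMaskSize, clusterCIDR)
	}
	base := ipToUint32(ipnet.IP)
	step := uint32(1) << uint(bits-nodeMaskSize) // addresses per node subnet
	var out []*net.IPNet
	for i := 0; i < count; i++ {
		ip := uint32ToIP(base + uint32(i)*step)
		out = append(out, &net.IPNet{IP: ip, Mask: net.CIDRMask(nodeMaskSize, bits)})
	}
	return out, nil
}

func ipToUint32(ip net.IP) uint32 {
	v4 := ip.To4()
	return uint32(v4[0])<<24 | uint32(v4[1])<<16 | uint32(v4[2])<<8 | uint32(v4[3])
}

func uint32ToIP(v uint32) net.IP {
	return net.IPv4(byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}

func main() {
	subnets, err := nodeCIDRs("10.244.0.0/16", 24, 4)
	if err != nil {
		panic(err)
	}
	for _, s := range subnets {
		fmt.Println(s) // 10.244.0.0/24, 10.244.1.0/24, ...
	}
}
```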
B: Cool. So hopefully, in the next round of furious revising of KEPs to get things in before the KEP freeze, one of these will be in there.
F: Yeah, also, I vote yes on a modern controller for the IPAM. The reason is that the existing controller is convoluted, and there is a big if-statement around certain cloud providers that shouldn't be there, so redefining this and testing it might not be as easy as it looks. So if we can leave the existing one as-is, just like what we've done with Endpoints, and then eventually retire it...
A: Thank you, Tim. Our agenda has a few more things on it that popped up. I think Antonio wanted to talk about IPVS kube-proxy and the future of that. Yes?
D: Because usually Lars, and one guy from Datadog, they were active, you know, working and fixing bugs. And I think that in the latest releases the work on IPVS is slowing down, and I don't know if it's because people are stopping using it or not. But I want to raise this topic before, you know, in two releases nobody's using it and we have one component that is not working.
B: Yeah, if I can be a little bit more dramatic than Antonio: do you feel it's in good repair and good maintainership, or do you feel like it's decaying?
D
We
have
that
is
only
one
job
and
it
has
a
failing
test
for
I
don't
know,
I
cannot
say
one
month
two
months,
I
don't
remember,
and
I
was
doing
some
theaters
this
weekend
and
and
I
see
a
lot
of
new
open
issues
and
nobody
replying
on
that.
So
I
don't
and
I
see
that
some
people
that
used
to
go
there
is
moving
to
ebtf.
So
I
I
just
want
to
raise
this
before
we
went
with
a
component
that
that
nobody
cared
just
one
person
wants
to
want
it.
B: Antonio, can you provide a link so that Lars can take a look, and maybe, if we still think this is a risk, then we can put it on the agenda with some lead time, so we can get other people who care about this to come.
B: Well, the userspace proxy, I'm pretty confident not a lot of people use it, and the people who do use it use it for a particularly niche case, right, Dan? Yeah. IPVS, I think, is much more broadly used.
B
I
mean,
let's
say
we
have
a
larger
question
of:
do
we
have
any
path
to
removing
anything
from
these
at
all
ever
or
do?
Are
we
going
to
maintain
them
forever
right.
K: I can agree with that, and there is a KEP, I believe, or some proposal, to break the kube-proxy things out as a library, and even into its own repository, and that would...
B: So I think the fun part there is: if we want to do one, we need to do all of them, right? I think we can make a reasonable statement that kube-proxy as a monolithic thing is effectively frozen: we'll take security issues and bug fixes and those sorts of things, but we're not going to augment it in any major ways, and there's this other thing that's the new way of doing it, and that's maybe out of tree.
B: Maybe it's independent per method, so there's one for iptables, one for IPVS, one for userspace, and that's the new way of doing things.
B
The
problem
is:
if
we
kick
one
out
but
leave
the
rest
in
then,
then
I
think
it's
where
we
have
troubles,
saying
exactly
how
we're
going
to
support
and
justify
that,
but
moving
to
a
bunch
of
disparate
projects
we
have
to
think
through,
and
I
honestly
I
just
don't,
have
the
answer
for
it
of
what
does
that
mean
for
users
who
assemble
a
cluster
today,
we'll
have
to
think
about
the
the
life
cycle
and
turn
up
and
maintenance.
D: The problem here is not the number of lines of code, it's a problem of maintenance. If people use it, then it has bugs and we don't fix them, or it doesn't have any tests, or if you have to add a feature you have to add it to IPVS too, and you really don't care. That's the thing: having 2,000 lines of code is really not the biggest problem. The question is what we want to do with our components.
D
You
know
and
are
we
going
to
have
now
topology?
We
need
to
add
it
exactly
yes,
too,
and
you
know
if
nobody's
used
it,
it's
a
lot.
It's
you
know
it's
complicated
and
we
don't
have
too
much
test.
So
maybe
you
add
the
future
and
it's
not
working
and
somebody's
going
to
use
it.
What
are
you
going
to
do?
I
really
want
to
support
it
or
not.
That's
the
question
that
you
know
we
should
do.
B: So I'm excited about the idea of the v2 KEP, and I think it's a good opportunity for us to reevaluate how we distribute kube-proxy, and it may be the answer to my biggest concerns. We'd really have to work together with the kubeadm team to make sure that they know where to pull the different, most popular options from.
F: Redefine an interface and keep the heavy lifting inside the proxier itself, and then topology becomes a function of filtering endpoints and services and all of that stuff, yeah. And we shouldn't worry about something outside the tree, because its function would be: oh, this is a new service, a new endpoint; which is pretty much what we do today, just not a clean cut. Because this problem will happen.
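A rough sketch of the kind of interface being suggested, loosely modeled on kube-proxy's event-handler pattern rather than copied from its actual API: the shared frontend does the watching and topology filtering, and each backend only implements this surface.

```go
// Sketch of a narrow proxier interface; names and method set are illustrative.
package proxy

import (
	corev1 "k8s.io/api/core/v1"
	discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
)

// Proxier is the surface each backend (iptables, IPVS, a future eBPF
// implementation, ...) would implement; everything in front of it, such as
// watching the API and topology-aware filtering of endpoints, stays shared.
type Proxier interface {
	OnServiceAdd(svc *corev1.Service)
	OnServiceUpdate(oldSvc, newSvc *corev1.Service)
	OnServiceDelete(svc *corev1.Service)

	OnEndpointSliceAdd(slice *discoveryv1beta1.EndpointSlice)
	OnEndpointSliceUpdate(oldSlice, newSlice *discoveryv1beta1.EndpointSlice)
	OnEndpointSliceDelete(slice *discoveryv1beta1.EndpointSlice)

	// Sync flushes pending changes into the dataplane (iptables rules,
	// IPVS virtual servers, BPF maps, ...).
	Sync()
}
```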
F: Let's say we wanted a new eBPF proxy, right? And you're smiling, Antonio, I know, because it will happen. Let's face it, it's just a matter of time; I'm positive somebody's writing it somewhere and just testing it before they come out with it. So does this mean we're going to maintain three to four different proxies?
B: Somebody did ask, you know, "if I wrote an nftables version, will you take it?", and the answer was: we can't. We don't have the bandwidth to maintain yet another version that we don't understand deeply.
E: The question is, regardless of how we do it, because that maybe is the easy part, the question is: is somebody going to own it? Is somebody actively doing this work? That's why I feel like deleting it for a specific case is not necessarily the better solution, but it's like: I know that I could justify that in my day job, because I have a very specific reason to want to do that.
B
I
think
that
is
a
wonderful
topic
for
perhaps
our
next
meeting.
You
know
the
mikael
was
working
on
it,
but
you
know
he's
just
a
contributor
he's
just
one
person
we
haven't
really
stepped
up
to
help
him
sort
of
organizationally.
Maybe
we
should
maybe
we
should
make
this
a
sig
level
priority
for
22
or
23
and
see
how
many
people
we
can
get
to
pitch
in
on
that.
A: We've got eight minutes left. Bridget, you were next; it looks like maybe you resolved it in the minutes.
M: Hopefully it should be quick. I wanted to check whether it would be okay to bring issues that we get on kubernetes/dns for discussion in the issue triage that we do in the SIG Network meeting. As a quick overview, kubernetes/dns houses the source code for kube-dns, which used to be the default several releases back, but, more importantly, also node-local-dns, which is still widely used, and most questions we get on that repo are related to node-local-dns.
B
I
think
it's
fair,
especially
if
we're
opening
this
can
of
worms
of
moving
stuff
like
cube
proxy
out
into
separate
repos
like
we
will
have
to
deal
with.
There
is
more
than
kk
to
triage.
C: I have one question: for other repos that aren't k/k, are we going to need to add bots, add labels, anything structural, in order to make issue triage feasible?
B: I think all repos in the org get the triage label automatically now, right? So at least for my process of triage, that's really the only input signal that I'm using.
M: Okay, yeah, that's a good question. I actually don't see those labels automatically applied on the dns repo; I'll check if there's some setup I need to do for that. Interesting, but I can do that.
C: Once you have the link that isolates the issues you want triaged, then I think we would need it in the headers of the SIG Network meeting doc.
M: Got it, okay, sure. I can follow up on the process to do that; this was more just to find out whether the idea is okay with the group here. But yeah, I'll do the legwork to get the links up there. It's not like we have these issues every meeting, but I will be involved in bringing those lists up whenever we do the triage.
B: Awesome. I'll go read an email instead. Thanks, everybody, for a wonderful meeting; always nice to see you. Thanks, bye.