From YouTube: [SIG-Network] Ingress NGINX meeting 20210931
A: Afternoon, everyone. Today is August 31st, 2021. This is the SIG Network Ingress NGINX subproject meeting. This is a community meeting, and that means we comply with the CNCF code of conduct, which basically means: be excellent to each other. If there are any violations that need to be reported, please let the SIG chairs, me, or Ricardo know. With that, let's go ahead and get started. We have a lot to discuss today; we've got a lot of interesting conversations. The top three things, really, are obviously triaging issues.
A: So I think we can go around the room with some of the new folks that are here, because it looks like we've got a lot of new joiners, which is great. If you would like, just introduce yourself: name, company, and what you'd like to help with or see accomplished with the project.
C: I'll go again, yeah. My name is Raj. I'm working for a consultancy in the UK. I'm here to be part of the NGINX community, and looking to see if I can support on DevOps or sysadmin activities. Also, probably with some learning, I'd pick up some of the coding activities, but yeah, that's about it.
A: Alrighty, so let's go ahead and get started. What should we discuss first, folks? I know we've got a few of the v1 issue topics on there, and this will probably be a continuing discussion as well: the project naming confusion that we have between the NGINX Inc./F5 project versus the Ingress NGINX Kubernetes community-supported project. And then Carol has been having conversations with me and Noah (Noah's on vacation) about what the triage process would look like. So we can.
A: So there's been a lot of discussion, and we've seen it in issues before, where folks will ask questions in the Ingress NGINX Kubernetes-supported version; they'll open issues, and we can just tell by the versioning and the types of questions that they ask that they're asking about NGINX Inc. and not Ingress NGINX. So we've had issues with that. It looks like there's been some discussion on Reddit, and I know there are issues in Slack that have come up around the confusion and the naming. And I think officially someone has opened up an issue.
A: So the question was that we need to start having this discussion both with the NGINX team and the broader community, and I know there are some technical issues if we wanted to rename the project. So I just wanted to start a dialogue there. Brian, did I miss anything? Is there anything you wanted to add?
F: They find that they get returns for both projects, right, and they're not always paying precise attention to which project, because our project doesn't implement all the annotations that the community project supports, and the community project doesn't support the custom resources that we have on our side. That's really the nutshell of where we see a lot of this.
F: You know, folks on both sides actually run into more problems. I'm not sure that the project names themselves, as far as the repo names go, are that big of a deal, so much as how we represent the name of the project out in public and through documentation, and the way that manifests itself through search.
A: Yeah, I noticed it even just in the releases. I know that causes issues, because we say NGINX Inc. or NGINX Ingress Controller. We were using the differentiation "NGINX Ingress" for the NGINX-supported version versus "Ingress NGINX". So I was doing some Google searches, even looking at Stack Overflow questions; there are two tags for questions there, and yeah, it definitely does cause some issues.
F: Yeah, and like I say, I don't expect to resolve this today. We, as NGINX, want to be open about the conversation. We want to make sure that we involve the community here. We don't want to strong-arm anything; we want to come up with something amicable that's potentially low impact on everybody involved and helps resolve the situation for the community moving forward.
E: I don't know if Nabarun is here on behalf of the steering committee (I don't think so), but I think this is something that we should probably discuss with the Kubernetes steering committee as well, because the CNCF, or even the Kubernetes community, may have a good direction for us. But I see that this might be... I've seen the thread between you and Brian as well.
E: I just didn't get time to properly read and understand the problems, but yeah, this confusion: we need to somehow take some action, because it's confusing, even for us sometimes, which naming we should use, whether that's "NGINX Ingress Controller", "Ingress NGINX Controller", etc. Right? So.
F: Yeah, we've tried referring to the product name and the project names, and internally that doesn't always work; that's not always clear. I know on our side we tend to prepend the word "community" quite a bit to make it more apparent, but even that's not always clear to folks on our side either. So.
A: Yeah, and just thinking off the top of my head, there's no way this is the first time this has happened, so I'm sure the steering committee would probably have some helpful guidelines for us. I mean, I even noticed that you guys have an ingress controller as well, and then you have the NGINX Kubernetes ingress controller, so that's got to be confusing just internally as well.
F: Yeah, it's actually been evolutions of the same product name over time, so I know that doesn't help. I don't think it helped the situation either. Right, I mean, you could look at the project names and make an argument that the two project names could be inverted.
A: Yeah, okay. I think I can take it on: having the discussion with the steering committee, going from there, and seeing what our options are to help with the clarification. Because, like you said, I think search is the biggest problem, the landing zone for people getting help, and making sure that we direct them. And as I see questions, I think that's one of the other bigger things.
A: It looks like, Carol, you dropped yours off the agenda.
A: Okay, we've moved to the next one. With that, I guess we should prioritize the v1 issues that we're seeing. I know of the one with kind; I don't know if it's on this list. Do we want to start there, Ricardo, with the v1 issues that we're seeing?
A: There wasn't a lot of follow-up on Sunday, so I've got to find some time to follow up with Elvin. We've got to be really crisp when we go to do our next release, to make sure that we're on the upgraded version. I think there were just some issues again with the Docker build on the mainframe target, like we saw, and we didn't get a chance to triage and update it or fix it. So.
E: About the core dump, by the way: I've seen Elvin suggesting something about changing the glibc, or even downgrading NGINX to version 1.19 because of OpenResty. So did something like that happen there?
A: The first change had a bunch of those, so we did the upgrade to Alpine, we did the downgrade and the patching for 1.19, and the CVE that we patched, which was the reason we upgraded to 1.20; all of those failed. So we rolled it back, and when we rolled it back, we just tried the Alpine upgrade. So we haven't rolled everything back to 1.19 and the version of Alpine that we were using; we haven't gotten back to a steady state yet. Okay.
E: I will try to take a look into the build thing, probably this afternoon or tomorrow, to see if I can help you unblock that, and maybe we can do another round of tests, because on my side I couldn't reproduce it. I created a cluster here with six hundred Ingress objects and a bunch of Services, and I couldn't actually get that core dump. But Elvin is basically the parent of the whole implementation of the Lua things and the dynamic reloading.
E: So when I was kidding about "hey, Elvin knows what he's actually doing," it's because he knows all of that code and all of those Lua internals. So we should probably try to sync with him after those tests and see if he can at least create some reproduction scenario with us, right? Agreed.
E: Okay, yeah, and by the way, it also came up about moving from Alpine to Debian, or having two images built, at least for making some tests. I know that Raj is taking a look into that, and I can provide some help. I just needed this weekend, specifically; I needed to take some time and sleep for like 14 hours, because I was pretty tired. But I'm going to try to catch up with you and see.
A: Yeah, I think if we could just, again, diagnose it; it's the mainframe build. And if we have testing builds where we're testing out changes like this, if we could do something to exclude the mainframe build, that might be helpful, because we know nine times out of ten, when the build fails, it's the mainframe build.
E: Yeah, we should probably think about that, and maybe removing ModSecurity from the mainframe build, unless folks from IBM want to sponsor that build and be responsible for it, right? Because I know that IBM has been like the maintainer of that build, putting some exceptions in for the s390x.
E: Yeah, I think so, but I need to draw that out, because I don't want to complicate our already complicated build process even more, right? So I need to figure out if building ModSecurity as a fully static library works for NGINX, or if we should think about another build process.
E: I need to sit down with that; it's been a while since I messed with ModSecurity, and I know there are some details that we need to take care of when building and decoupling. So yeah, and I really want to take a look also at, probably, Curiefense in the future, and give users some alternative for a web application firewall, or at least release a form asking for our users' feedback about which features they use, because sometimes we are spending a huge amount of effort to maintain something.
A: Raphael, do you want to go ahead and get started? Do you want to discuss the issue that you have here, since we have you on the call? Yeah, sure.
B: So basically, this is the chart, deployed to GKE, that results in a load balancer being created. And the problem here is that, while the load balancer frontend is expected to listen on, for example, ports 80 and 443, it listens on a range of ports, so 80 through 443. The reason for this is that, for a public IP, a public GCP TCP load balancer is being created, and the case here is that GKE cannot create this particular type of frontend forwarding IP that allows a discrete set of ports.
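What Raphael describes can be modeled with a small sketch. This is purely an illustration of the observed behavior (a frontend that only supports one contiguous port range collapsing the Service's discrete ports into a min-max span); the function name is hypothetical, and this is not GCP's actual implementation:

```python
def forwarding_rule_ports(service_ports):
    """Illustrative sketch: a load balancer frontend that only supports a
    single contiguous port range collapses the Service's discrete ports
    (e.g. 80 and 443) into one min-max range."""
    lo, hi = min(service_ports), max(service_ports)
    return f"{lo}-{hi}"

# A Service exposing only 80 and 443 ends up with a frontend
# listening on the whole 80-443 range.
print(forwarding_rule_ports([80, 443]))
```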
A: So would this be an issue with the external load balancer from Google, from GKE's perspective, where something is configurable, or is it just something that we can't control from that perspective? What I'm trying to get at is: is this something that's inside the Ingress Helm chart's power to fix, or is this something that we'll have to raise with the GCP team?
B: Quite honestly, I don't know. I didn't see any code that would control the type of load balancer being created by the chart, so my belief is that this would need to be discussed in depth with the GKE or GCP team.
A: Yeah, I don't think so, because I know from the AWS perspective it generates an NLB, but from a GCP perspective I wouldn't know. Ricardo, do you have any insight into this one? I don't have much GCP experience either. So.
E: I can't give too much help. We can maybe ask someone from Google who is already in the Kubernetes community; usually Tim Hockin knows that answer, or folks like Rob Scott or Bowei may have the answer. But honestly, I have no answer for that.
A: Okay, it might be helpful to just ask in the comments.
E: Excuse me, sorry, folks. We can think about adding something in the Helm chart, but we need to know how this actually needs to be created from the Kubernetes perspective, because this is actually how the load balancer object is created on the Kubernetes API, right?
E: So I have no idea; we would need to read the documentation from GCP, sorry, and understand how this works. What I can understand here is that it's creating a port range, and not a load balancer with two ports, right?
B: One more hint: at the moment when I was creating the ticket with you guys, GKE didn't have this issue documented. So there is a limitation with GKE-based load balancers that results in this exact issue occurring; they didn't have it documented, and I asked them to document it.
B: Community, yeah, but what I mean here is that, at least when I was talking to GKE (I mean GCP, maybe not GKE), they didn't consider this to be an issue so much as the chart meeting a limitation of GKE, because it is a limitation, and the limitation was not documented.
A: So we've got some notes here, and we can follow up on that. This looks to be one of the issues with v1. This is a new one to me; I haven't seen this one yet. The one I was thinking about was the backend response on kind, but I think we've already talked about that, Ricardo, right?
A: It looks like it's just taking too long to get the new endpoints, and so it's getting 504s for a while, until the AWS ELB idle timeout hits. So I'm just thinking it's taking a while for them to register.
E: But is this like an AWS issue, or is this like an Ingress NGINX issue?
A: We need to flesh that out; we need somebody to be able to reproduce this as well.
A: But it definitely seems like it has been an issue for a while. Do you want to put this on the long term, for someone to look at?
A: This is a feature request. As far as feature requests are coming along, Ricardo, I know we're looking at 1.0.1 and 1.0.2 already. Do we want to keep 1.0.1 and 1.0.2 to bug fixes for v1?
E: We can do it in 1.0.3; I just accepted it and put it in the backlog, maybe, because it makes sense. But honestly, I want to focus at least on this flow of version one first. From my side, if someone wants to pick this up and make the feature implementation for this thing, I really don't mind; I think it's going to be good. But from my side, I want to focus on the version one stabilization.
A: This is, I think, one of the other issues: the admission controller for the certs.
E: If at least I can have a way to reproduce that as well... I can take a look into that. Or if someone knows a way to reproduce this, it's usually helpful when someone knows how to reproduce it, because then it can spare me some time. So.
A: I don't think it was the question of what he was looking at, because I removed... sorry, I pulled it up in another browser. When I was testing this one over the weekend, the logic in there: the controller, when it does a shutdown, always removes the status of the NGINX service that's being shut down. That's part of the shutdown process; in the loop, it removes it from the Ingress status. So I'm just trying to figure it out and remove it.
A: Should we bring up the new replica before we do a shutdown? Because I tried that. As part of this refactor, when we removed the labels, I changed it so that it removes the template hash, and it still happens; it still removes it. And I was doing some more investigation: it's part of the shutdown process, when it gets that shutdown signal.
E: Or maybe add the leader release; I don't know if we are doing that already.
E: There is an option: we can move the leader election from the ConfigMap to a Lease object, and we can do a leader release.
E: So at least we may know that, okay, this was a graceful shutdown and not, say, a kill. We are going to let Kubernetes know that we are releasing the leader for another deployment, for example, right?
A: Okay, I just want to make sure that we covered that. That can be something that I can look into, because I'm still assigned to that one, and I'll look to see when we actually do the leader release, or if we're just letting the new ReplicaSet take over. So we can add that to the shutdown process, and I can test it out.
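A minimal sketch of the ordering being discussed, assuming a lease-style lock: release leadership before removing this pod from the ingress status, so a surviving replica can take over and re-publish the address. All names here are hypothetical models, not the controller's actual API:

```python
class Replica:
    """Hypothetical model of a controller replica competing for leadership.
    The shared dict stands in for a Kubernetes Lease object."""

    def __init__(self, name, lease):
        self.name = name
        self.lease = lease

    def try_acquire(self):
        # Grab the lease if it is free; report whether we are the leader.
        if self.lease["holder"] is None:
            self.lease["holder"] = self.name
        return self.lease["holder"] == self.name

    def graceful_shutdown(self, published_addresses):
        # Release the leader lock first, so the new ReplicaSet's pod can
        # become leader and keep the ingress status up to date...
        if self.lease["holder"] == self.name:
            self.lease["holder"] = None
        # ...then remove this pod's address from the published status.
        published_addresses.discard(self.name)

lease = {"holder": None}
status = {"old-pod"}
old, new = Replica("old-pod", lease), Replica("new-pod", lease)
assert old.try_acquire()
old.graceful_shutdown(status)
assert new.try_acquire() and status == set()
```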
E: Okay, can we maybe skip the feature requests?
E: I am up for adding the label for feature requests, so we can know what we need to implement, really. But I would like, at least for the next month... we've made a huge change, and I think that, in my opinion, folks, we should probably be looking more into the bugs instead of the features, just for this specific cycle.
G: Okay, how about now? Yeah. So, I was just saying, there are several issues that talk about performance. This one is talking about losing connections on backend reload, and the ones that you were trying to fix with Elvin by building a new image. There are several issues where they have talked about the same thing: having a lot of Ingresses, which is what Ricardo said he's not able to reproduce.
G: One user said that in production they had a fairly large number of Ingresses, but in development they had a humongous number compared to production, and that was a problem all the way since 0.46, maybe 0.46. But on v0.49 they have had stability for at least two days as of the day they commented, with the same humongous number of Ingresses.
G: So the relation being reload lag, or a crash dump.
E: Actually, the startup probe: they have added that, so they wait until the ingress controller is properly ready. Right, probably it takes like 10 seconds or more to reconcile that huge amount of data.
E: Yeah, I was actually thinking about splitting the NGINX configuration files, but I remember, I guess, folks from NGINX have said in the past, in this meeting, that splitting the files wouldn't improve the performance side, right?
E: Yeah, but this is a workaround. The real problem is: how can we not only generate the file, but, when we have to regenerate the NGINX configuration template,
E: avoid having this kind of downtime, right? Those downtimes, I guess, happen not when you add or remove an endpoint, but when you add or remove a new virtual host, because you actually need to tell NGINX that you are serving that new virtual host, right? So when you change the NGINX configuration, it needs to reload NGINX.
E: So if you have a bunch of users with, like, three thousand Ingress objects, and each of these Ingress objects has its own virtual host, for example, you have reloads every time someone changes something related not to the endpoints but to the configuration, or something like that. You have the whole NGINX configuration file being regenerated again and reloaded, right? So maybe one solution that we could discuss is the same approach that I know HAProxy takes, which is:
E: They have a frontend that accepts everything, and from that frontend they send the traffic via, for example, a Unix socket to the backend, right? So you don't have all of the reloads, and the same backend is running on the same NGINX instance, and from there it goes to the endpoints. But this would be a really big refactoring. Yeah, it sounds like a major one.
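The distinction Ricardo draws above, endpoint churn applied dynamically by the Lua balancer versus virtual-host or configuration changes forcing a full NGINX reload, can be sketched roughly like this. The change categories and function are a simplified assumption for illustration, not the controller's real decision logic:

```python
# Simplified sketch of the reload decision discussed above. In the real
# controller the comparison is between rendered nginx.conf contents; here
# we just tag the kind of change.
DYNAMIC_CHANGES = {"endpoints"}  # applied via the Lua balancer, no reload

def needs_reload(change_kind):
    """Anything that alters the rendered configuration template, such as
    adding or removing a virtual host, forces an NGINX reload."""
    return change_kind not in DYNAMIC_CHANGES

assert not needs_reload("endpoints")      # pod churn: no reload
assert needs_reload("add-virtual-host")   # new server block: reload
assert needs_reload("config-option")      # rendered config changes: reload
```

This is why three thousand Ingresses with their own virtual hosts make configuration-level edits so expensive: each one regenerates and reloads the whole file, while plain endpoint updates do not.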
E: Yeah, we should probably stop and think it through, and say, hey, we can improve this way and reduce the downtime. But I don't think that right now we have a solution, other than maybe trying to improve the performance, or adding something else that can consolidate all of the configurations in one place.
G: Regardless, were you working on a Go-based reloading of the config file?
E: It was somebody else who was actually working on rewriting the template generation to Go, and that reduces the reloading. Actually, the template generation, like 85 percent of the time, was JavaScript, and they have migrated it to Go.
A: Yeah, I mean, that's not going to change. Okay, so we've got about eight minutes left. I know I've got a couple of things to follow up on, so I'll follow up with Elvin on the testing that we were doing for the core dump issue, and then we can discuss his thoughts on optimizing for the reloads, along with some of the other issues that we've seen. And then I'll take on following up and seeing if there's an issue with GCP/GKE, and I'll continue looking at the status changes on shutdown as well.
A: Is there anything else? Oh, and the steering committee stuff. So we've got about three or four action items from this one.
E: If someone is interested, I'm going to work on issues and pull requests for version 1.0.1; they are on the milestone for version 1.0.1. But I need some help, folks, because I am not having enough time for that.
E: So if you go to the milestone... go back, yeah. We have the fix for the permission level, which I think someone is working on and I need to review, but we still have some stuff that should be worked on. So even reviewing PRs, or triaging the issues for 1.0.1 and helping reproduce them.
E: So I just need some help, because I wanted at least to give a response to the community that we are fixing the bugs that they are finding; we are not just giving them no attention. So if you can, please prioritize version 1.0.1 with me: take a look, test, take a look into the PRs and what they do, recompile, run it in your own environment, and say, "hey, this has an impact or not." So let's move forward with that, okay?
C: Yeah, I can take part in that, Ricardo.
A: Awesome. Well, I'm just writing up what the action items are, to make sure that they're a little bit more clear. Unless we have anything else to discuss, I think that's it for this week.