From YouTube: App Runtime Platform Working Group [April 6, 2022]
A: Thanks for attending our working group meeting. I'm very sorry for the time confusion.
A: Let's see. I guess first I just wanted to thank Jeff and Dominik, who's not here, for being very on top of issues and PRs. I was reviewing a bunch of them recently, just going through them, and I feel like Jeff is responding to half of them and Dominik is responding to the other half. It's nice having such dedicated community members, so thank you.
A: Oh yes, please feel free to add any items to the agenda that you would like to talk about. Jeff, I was wondering if you would be okay with sharing a dynamic ASG update: where we are now, and what we're thinking about doing next?
B: Sure. 3.3.0 should be great to use, ish. We're looking into some performance issues: at extremely high scale there are situations where it does end up causing, or exacerbating, a memory leak in CAPI.
B: I'm curious if anyone is able to share metrics about the number of iptables rules, the highest number of iptables rules they've seen on a cell in any of their environments. If that's something you'd be able to provide, I can show you some metrics to find it, or give you some commands to run on a cell, if that helps. But yeah.
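For reference, the commands B mentions aren't in the transcript, so here is a minimal Go sketch of one way to get that number: list every rule specification in the filter table on a cell and count the lines. It would need to run as root on the cell, and it only covers the filter table.

```go
// Hypothetical sketch (not the commands B offered): count iptables
// rules on a Diego cell by listing rule specs and counting lines.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `iptables -S` prints one line per rule (including chain
	// policies) for the filter table; pass `-t nat` etc. for others.
	out, err := exec.Command("iptables", "-S").Output()
	if err != nil {
		log.Fatalf("listing iptables rules: %v", err)
	}
	trimmed := bytes.TrimSpace(out)
	n := 0
	if len(trimmed) > 0 {
		n = bytes.Count(trimmed, []byte("\n")) + 1
	}
	fmt.Printf("iptables filter-table rules: %d\n", n)
}
```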
A: I thought I could give the NATS update, the NATS v2 update. All right, so we're trying to upgrade to NATS v2 for some performance enhancements and, of course, to be on a supported line. The major issue is that v1 NATS nodes do not talk to v2 NATS nodes, so during a deploy this causes split brain, and split brain can cause all types of failures that you don't want.
A: So I guess where we're at now is that we've decided the path we want to take and we're starting to execute on it; nothing's released yet. But the path we're taking is basically deploying a NATS release with the code for both v1 and v2, and then, on some kind of trigger, restarting them all as v2. So: starting them all as v1, flipping them to v2 on the trigger, and then monitoring.
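The transcript doesn't describe how the "trigger" actually works, so, purely as an illustration of the ship-both-and-flip-on-restart idea, here is a hypothetical wrapper that execs the v1 binary by default and the v2 binary once a trigger file exists. The paths and the file-based trigger are assumptions, not the actual nats-release design.

```go
// Hypothetical illustration of "deploy both, restart as v2": a tiny
// wrapper that picks which nats-server binary to exec based on a
// trigger file. All paths below are made up for the sketch.
package main

import (
	"log"
	"os"
	"syscall"
)

const (
	v1Bin   = "/var/vcap/packages/nats-v1/bin/nats-server" // assumed path
	v2Bin   = "/var/vcap/packages/nats-v2/bin/nats-server" // assumed path
	trigger = "/var/vcap/data/nats/use-v2"                 // assumed trigger file
)

func main() {
	bin := v1Bin
	if _, err := os.Stat(trigger); err == nil {
		// Trigger exists: the next restart brings the node up as v2.
		bin = v2Bin
	}
	log.Printf("starting %s", bin)
	// Replace this process with the chosen server so process
	// monitoring and signals hit the real nats-server.
	if err := syscall.Exec(bin, append([]string{bin}, os.Args[1:]...), os.Environ()); err != nil {
		log.Fatalf("exec %s: %v", bin, err)
	}
}
```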
A: Next on the agenda, I had this PR about returning a 503 and not a 404 when an app is down. This has been such a common complaint, and I'm so excited that someone submitted this PR, so hopefully it'll get in soon; I just wanted to highlight it. I'm sure you've all had an app where, you know, something went wrong, a Gorouter pruned it, and because it pruned it there were no more apps running, and so it deleted the pool.
A: And so, you know, you might get a 404 once and then you get a five... no, you get a 503 once and then you get a 404, and then, as Diego starts it up, it goes back to the 503, or, you know, maybe it's crashing. Anyway, it flips back and forth between these status codes in a confusing way. And so someone who I don't recognize, "load notify", has worked on adding some properties that basically set a timer that says: hey, leave it returning that 503 for a little bit, for some amount of time, and only prune an empty pool after that amount of time. So hopefully this will help decrease that kind of flip-flopping back and forth between the status codes.
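The actual property names in the PR aren't given in the transcript, but the logic being described might look roughly like this sketch: while a pool is empty but the grace period hasn't elapsed, keep answering 503; only after the period expires does the route get pruned to a 404. The name emptyPoolTimeout is illustrative, not a real Gorouter property.

```go
// Sketch of the pruning grace period described above (names are
// illustrative, not the actual gorouter configuration).
package main

import (
	"fmt"
	"net/http"
	"time"
)

type pool struct {
	endpoints  int
	emptySince time.Time // set when the last endpoint was pruned
}

// statusFor decides what an empty or populated pool should return.
func statusFor(p *pool, emptyPoolTimeout time.Duration, now time.Time) int {
	switch {
	case p.endpoints > 0:
		return http.StatusOK // the real router would proxy to a backend
	case now.Sub(p.emptySince) < emptyPoolTimeout:
		return http.StatusServiceUnavailable // 503: app may be restarting
	default:
		return http.StatusNotFound // 404: pool pruned after the grace period
	}
}

func main() {
	p := &pool{endpoints: 0, emptySince: time.Now()}
	fmt.Println(statusFor(p, 30*time.Second, time.Now()))                  // 503
	fmt.Println(statusFor(p, 30*time.Second, time.Now().Add(time.Minute))) // 404
}
```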
A: And I see some people have added something to the bottom of the agenda. Carson, was that you?
C: Some folks have been working on adding more comprehensive GitHub Actions to some of the repos, or experimenting with it, I guess, to try and get more visibility into unit tests on the repos. Since we're an open source organization, all our repos are open source and it's totally free to do this. So most of the logging and metrics repos have now been updated to run unit tests and go vet on PRs and on main-branch pushes, which is pretty cool, it turns out.
C: There were actually a few race conditions that we weren't catching previously, because, you know, most of our test code was old. There's a lot of go vet stuff that we fixed, and a few of our unit tests actually depended on a certain RAM size. Interestingly, GitHub Actions only gives you, I think, something like a two-to-four-gigabyte-RAM container to work with, so it's pretty small relative to Concourse. That's been interesting.
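The transcript doesn't say which races were found, but as a generic example, this is the kind of bug a unit test run with the race detector in CI surfaces: concurrent unsynchronized writes to shared state. Without the -race flag the test usually passes silently; with it, the detector reports the conflicting writes.

```go
// counter_test.go: generic illustration (not one of the actual bugs
// found) of a data race that `go test -race` flags.
package counter

import (
	"sync"
	"testing"
)

func TestUnsyncCounter(t *testing.T) {
	n := 0
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			n++ // racy write: no mutex or atomic increment
		}()
	}
	wg.Wait()
	t.Logf("final count: %d (may be under 100 due to lost updates)", n)
}
```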
C: We also tried to standardize our repo settings and pull request templates across repos, to try and bring a little familiarity to all the various repos. Oh, and branches, so that you can go to a different repo and know that the main branch is the main one: there's no develop branch, there's no release-candidate branch, it's all main, and release branches start with "v". The pull request template gives us a little more information when we're evaluating pull requests, which is nice.
C: The pipelines have automatically bumped most of the repos to actually start using Go 1.18. It doesn't seem like there are any problems with that so far, except for some weirdness around go get breaking some stuff, like you can't go get anymore (in 1.18 it no longer builds and installs packages), and, weirdly, if you don't have the git CLI available but there's a .git folder, that can break your go build now. So that's been a whole thing, but it actually is going pretty smoothly, and we might have most logging and metrics repos at Go 1.18 soon. It's moving.
C: I think the big concern is maybe Log Cache, with the memory management changes in Go 1.18. We'll know about that soon; we're looking into it and doing some testing.
F: At SAP we started analyzing some repos for security issues. Most of them will be solved by bumping the Go version, I guess, so thanks, Carson, for doing that work. And what we're planning to do next is to revive the PostgreSQL release so that it'll be up to date.
F: So, Carson, you said that Go version 1.18 might cause some problems with Log Cache. How come? What do you expect there?
C: Well, fingers crossed, nothing. Or, I guess in the best situation, we actually get better memory management in Log Cache. There was a major change to garbage collection in Go 1.18, where it monitors the stack as well as the heap (I think it was the stack as well as the heap) and triggers collection more frequently. Since Log Cache is a very memory-heavy application with some history of memory problems, we were a bit worried about it.
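As a minimal sketch of the kind of testing mentioned here, one way to compare GC behavior between a Go 1.17 build and a Go 1.18 build of the same service is to log collection counts and heap size under load. This is just one possible approach, not the team's actual test setup.

```go
// Sketch: log GC cycle counts and heap size every five seconds while
// the process is under load; run the same load against 1.17 and 1.18
// builds and compare.
package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	var last uint32
	for range time.Tick(5 * time.Second) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		log.Printf("GC cycles in last 5s: %d, heap alloc: %d MiB",
			m.NumGC-last, m.HeapAlloc/(1<<20))
		last = m.NumGC
	}
}
```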
C: So we're having a deeper look, and before we do, we actually manually pinned it back to 1.17 for now, to make sure it doesn't auto-bump before we can have a look at it. But most likely we'll find that there's nothing wrong; we're just being a little overly cautious.
F: Log Cache by itself is an interesting piece of software, because it does lots of things, and apart from the memory usage it also needs more CPU now. Now, let's move on to syslog.
C: Yep, absolutely. I think that was called out in some of the releases, maybe not as well as we should have, but yeah, that is part of the expectation. I didn't know about the CPU rise in Log Cache; that's interesting!
C: Someone actually did a pprof review of the Forwarder Agent fairly recently (I think it was the Forwarder Agent, it may have been one of the other agents) and discovered that the major drain on CPU was actually marshalling and unmarshalling the protobuf messages, and it was happening twice, in a not-so-great way.
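For context, the standard-library way to do a review like that on a long-running agent is to expose net/http/pprof and pull a CPU profile; the marshal and unmarshal frames then show up directly in the profile. Wiring it into the Forwarder Agent, and the port, are assumptions here; the pprof endpoints themselves are stock Go.

```go
// Standard net/http/pprof setup of the sort such a review relies on.
// With this running, a CPU profile can be captured with:
//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the agent's real work
}
```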
C: So I think there's some investigation to be done about whether we could reduce that time.