From YouTube: App Runtime Platform Working Group [Aug 2, 2023]
A
All right, well, I guess we can get started. This is a good amount of people. Welcome to the ARP group, the App Runtime Platform Working Group. I think I got that right.
A
Okay,
cool
so
I
guess
we'll
start
with
Joshua
and
Joshua.
B
Today's topic from Joshua and me is about a metric that we introduced as a feature, which is about getting insights into the traffic of a container of an application. In particular, what I mean by that is getting insights into the bytes received and the bytes transmitted, so to say, which is helpful for various reasons.
B
The plan today is that I will give a little intro and a little live demo, and then Joshua will walk you through the code changes, the most important ones that we have done and where we submitted pull requests. So the goal of our appearance here is to raise a little bit of awareness about that feature and maybe get some feedback, or get some people, yeah...
B
...wanting to review our pull requests. So I'm going to share my screen. Can I? You are still sharing. Go for it. Okay, perfect! Thank you.
B
So
can
you
see
it
okay,
perfect?
So,
as
I
said
already,
it's
all
about
container
metrics
and
let
me
start
with
the
documentation
about
the
lock
regulator
container
metrics
here
you
see
that
we
have
various
metrics
for
CPU
memory,
etc,
etc,
but
yeah
as
you
can
see.
Obviously
there
is
nothing
regarding
networking
and
basically
we
didn't
start
a
journey
ourselves.
B
Someone
else
started
it.
At
least
this
is
one
of
the
very
first
issues
that
we
found
related
to
exactly
that
topic.
Where
someone
asked
hey:
yeah
man,
CPU
memory,
disk
Etc,
everything
is
cool,
but
why
don't
we
have
something
for
Network
for
network
and
we
we
thought
so
too.
So
we
took
the
time
to
come
up
with
something
and
yeah.
Let
me
just
jump
right
into
the
demo.
B
So,
on
the
left
side,
you
can
see
that
I'm
querying
constantly
the
metrics
for
CPU
of
a
particular
app,
and
here
shows
me
the
metrics
for
container
one
and
container
container
one
and
two
and
on
the
right
side
you
can
basically
see
that
I
SSH
into
a
container
and
what
I
will
do
now
is
I.
Will
just
yeah
abort
that
and
show
you
the
very
first
metric
that
we
introduced.
B
It's
called
we
it's
called
RX
bytes
means
for
yeah
receiving
bytes
and
on
the
right
side
just
to
let
me
file
it
up.
So
you
see
here,
you
see
the
very
you
see
already
that
the
metric
is
being
submitted
and
it's
being
visible
here
and
what
I'm
going
to
do
is
I'm
going
to
cause
some
Network
traffic
by
downloading
a
random
10
megabyte
file
and
what
you
can.
What
you
will
see
after
the
download
has
happened
that
from
container
one
default
here
from
container
zero.
B
Sorry
before
here
will
jump
to
a
five,
which
is
basically
the
proof
that
something
has
happened
end
to
end.
So
let
me
fire
up
the
download
and
yeah.
B
Yeah,
give
it
a
little
more.
What
you
will
see
is,
as
I
said
already
from
these
four
here
will,
will
turn
into
a
five
very
soon,
so
the
download
has
happened
and
from
container
zero
sits
through
four
and
very
soon.
Yes,
there
you
go.
It
jumped
to
five
means
like
the
receiving
bytes
was
around
40
megabytes
so
far
and
after
the
download
it's
around
50
megabytes
and
we
have
the
same
for
the
transmitted
byte
means
like
whenever
we
uploaded
stuff
or
something.
B
So
let
me
query
that,
and
you
will
see
a
similar,
a
very
similar
behavior
when
the
metric
from
container
0
here
will
jump
from
two
to
three.
B
Yes, I'm just uploading that file now, and what you can see is that very soon it will jump.
B
Yeah
you
see
here,
it
went
from
24
to
34
means
like
again
we
uploaded
10,
megabyte
and
yeah.
This
is
basically
already
all
the
stuff
that
I
wanted
to
show.
You
give
you
the
intro
and
show
you
what
we
achieve.
What
kind
of
feature
how
it
looks
like
in
real
world
and
I
will
hand
over
now
to
Joshua
that
and
who
will
walk
you
through
the
the
most
important
changes
that
we
have
done
in
order
to
achieve
that.
D
All right. As Josh stated, we also quickly wanted to go over the code changes we did to implement the network metrics. We created five PRs in total. Today we want to go quickly over three of them, where we actually had to implement some logic and add something new. In the other PRs...
D
It's
it's
mostly
just
forwarding
information
that
that
we're
reading
here
so
they're
less
exciting
to
look
at
and
the
first
one
or
the
first
bit
we
created
is
in
a
CF
networking
release
and
the
first
change
we
did
here
was
that
we
wanted
to
forward
the
interface
name
of
the
container.
D
So
the
networking
interface
name
of
the
container,
together
with
the
other
information
that
is
forwarded,
select
the
container
IP
and
the
reason
to
do
that
was
that
we
want
to
read
the
network
metrics
in
the
garden
component,
together
with
the
other
metrics
that
are
already
there,
and
for
that
we
needed
the
interface
name,
which
was
not
there
yet.
So
that's
why
we
added
here
and
to
to
expose
this
information.
D
We
adjusted
the
app
method,
which
is
called
when
the
container
started,
and
there
are
not
many
changes
in
here,
but
the
first
change
we
did
and
first
important
change
we
did
was
that
we
had
to
bump
the
expected
cni
result
version
to
0.4.0
and
reason
for
that.
One
is
that
the
previous
versions
did
not
expose
the
network
interfaces,
and
this
one
does
and
this
one
works
fine
with
the
current
circuit
list,
as
we
just
saw
in
the
demon
and
after
doing
that,
we
get
the
first
ipv4
IP
address.
D
What this allows us to do now is get the interface name that matches this container IP and attach it to the Up outputs, which are then forwarded to the Garden component, which can then read the network metrics. This is the first, I would say, important change. This information is then used in the Garden component. Here we checked where the other container metrics are read, and we just appended our code there to get the network stats.
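The interface lookup described above can be sketched as follows. This is a minimal illustration, not the actual cf-networking code: the real implementation uses the containernetworking/cni result types, while the structs and function names here are invented for the example. A CNI 0.4.0 result carries an `interfaces` list, and each entry in `ips` points back into that list by index, which is what makes the name lookup possible.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Minimal structs mirroring the parts of a CNI 0.4.0 result we need.
// (Illustrative only; the real code uses the CNI library's types.)
type cniInterface struct {
	Name    string `json:"name"`
	Sandbox string `json:"sandbox,omitempty"`
}

type cniIP struct {
	Address   string `json:"address"`             // CIDR form, e.g. "10.255.96.4/32"
	Interface *int   `json:"interface,omitempty"` // index into Interfaces
}

type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIP        `json:"ips"`
}

// interfaceNameForIP returns the name of the interface that the given
// container IP is assigned to, using the index carried by each IP entry.
func interfaceNameForIP(res cniResult, containerIP string) (string, bool) {
	for _, ip := range res.IPs {
		if strings.Split(ip.Address, "/")[0] != containerIP {
			continue
		}
		if ip.Interface != nil && *ip.Interface < len(res.Interfaces) {
			return res.Interfaces[*ip.Interface].Name, true
		}
	}
	return "", false
}

func main() {
	raw := `{
	  "cniVersion": "0.4.0",
	  "interfaces": [{"name": "eth0", "sandbox": "/proc/1234/ns/net"}],
	  "ips": [{"address": "10.255.96.4/32", "interface": 0}]
	}`

	var res cniResult
	if err := json.Unmarshal([]byte(raw), &res); err != nil {
		panic(err)
	}
	name, ok := interfaceNameForIP(res, "10.255.96.4")
	fmt.Println(name, ok) // eth0 true
}
```

With a pre-0.4.0 result version the `interfaces` list is simply absent, which is why the expected version had to be bumped before this lookup could work.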
D
Here
we
were
able
to
reuse
the
network
stack
property,
which
was
already
there
from
a
previous
implementation.
So
it
seems
that
this
was
a
leftover
and
we
were
also
able
to
reuse
the
container
networks
that
struck,
which
was
already
there,
and
just
attach
our
information
that
we're
reading
here
and
so
what
we
do
here
is
we
get
the
interface
name
from
the
previously
exposed
information
and
use
this
one
right
here.
D
So
what
we
do
here
is
that
we
call
the
Run
methods,
which
was
already
there,
which
allows
us
to
to
run
the
process
and
run
a
container,
and
this
allows
us
to
get
the
received
and
transmitted
bytes
for
that
specific
container
network
interface,
and
we
simply
do
this
by
by
using
the
cisfs
file
system.
So
this
allows
us
now
to
to
read
the
the
outgoing
and
incoming
bytes
for
that
specific
container.
D
After
that,
nothing
exciting
is
happening.
So
it's
just
some
pausing,
some
validation
of
the
values
we're
getting
and,
in
the
end,
we're
just
returning
the
container
Network
stats,
which
are
then
appended
to
the
already
existing
Garden
Network
structure
and
the
last
step
of
this
journey
is,
is
then
the
Diego
logging
client.
D
So
in
the
end,
the
executor
will
fetch
the
metrics
and
use
the
send
app
Matrix
method
to
to
send
all
the
other
metrics,
which
you
already
see
here,
and
we
just
appended
our
RX
by
CTX
bikes,
metrics,
which
we
also
just
saw
on
the
demo
and
yeah,
and
this
basically
ends
since
the
the
new
metric
data
to
local
data
and
and
that's
it
yeah.
That's
that's
pretty
much
it
from
our
sites,
so
yeah,
as
Josh
already
said,
it
would
be
great
to
to
get
some
feedback
on
this
pass
and
yeah.
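Conceptually, the last step appends two optional counters to the metric payload the executor already emits. The sketch below is a hypothetical shape, not the real diego-logging-client type: the actual struct has more fields and may use different names; only the `RxBytes`/`TxBytes` metric names come from the demo.

```go
package main

import "fmt"

// ContainerMetric is a sketch of the payload; the real
// diego-logging-client type is richer. RxBytes/TxBytes are the two
// counters added by the PRs; they are pointers so "not collected"
// is distinguishable from zero.
type ContainerMetric struct {
	CPUPercentage float64
	MemoryBytes   uint64
	DiskBytes     uint64
	RxBytes       *uint64
	TxBytes       *uint64
}

// toEnvelope flattens a metric into name/value pairs the way a
// SendAppMetrics-style method would emit them; the optional network
// counters are only included when they were actually collected.
func toEnvelope(m ContainerMetric) map[string]float64 {
	env := map[string]float64{
		"cpu":    m.CPUPercentage,
		"memory": float64(m.MemoryBytes),
		"disk":   float64(m.DiskBytes),
	}
	if m.RxBytes != nil {
		env["rx_bytes"] = float64(*m.RxBytes)
	}
	if m.TxBytes != nil {
		env["tx_bytes"] = float64(*m.TxBytes)
	}
	return env
}

func main() {
	rx, tx := uint64(52428800), uint64(35651584) // ~50 MB / ~34 MB, as in the demo
	fmt.Println(toEnvelope(ContainerMetric{CPUPercentage: 1.5, RxBytes: &rx, TxBytes: &tx}))
}
```

Making the new fields optional is what keeps the change backward compatible: consumers that do not know about the network counters simply never see the extra pairs.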
A
Awesome, thank you, that looks really cool. I'll make sure someone takes a look at it in the near future, hopefully this week or next week.
A
Yeah, that's really cool to see. Next up, it looks like, where is it today, there we go: Patrick, you wanted to talk about load balancing algorithms.
C
Yeah, big footsteps to fill, actually. So I'm much earlier in the process than the previous topic. I'm looking for feedback before we actually start any implementation, or even an RFC, for the topic I want to bring. So it's more about collecting feedback: what experiences you made in the past, whether you had similar discussions and decided against or for something by intent, and maybe you can add to or stop our ideas before we waste effort.
C
As probably most of you know, we support two load balancing algorithms in the routing stack, in gorouter: least-connection and round-robin. You have to configure at the platform level, at the Cloud Foundry installation level, which load balancing algorithm you want, and it is then applied to all applications. As you also know, we have a multi-tenant platform with tons of customers and a widely varying range of use cases, so for us there's every now and then someone asking for least-connection.
C
We
are
using
round
robin
as
a
default
setting
for
all
our
distributions,
but
every
now
and
then
someone
asks
for
at
least
connection
and
of
course,
then
we
can
say
only
say
it
will
affect
others.
We
won't
change
this
Global
thing
yeah,
but
it
would
be
nice
if
a
customer
could
choose
their
own
load,
balancing
algorithm
on
their
own,
so
just
one
customer
for
their
route
or
their
org
or
space
whatever
we
can
Define
and
can
select
the
load
balance
algorithm
of
their
choice.
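The proposal above boils down to a per-route override falling back to the platform-wide default. The sketch below illustrates that idea only; it is not gorouter code, and the field and algorithm names are assumptions for the example (the pool is assumed non-empty).

```go
package main

import "fmt"

type endpoint struct {
	addr        string
	connections int // in-flight requests to this backend
}

type pool struct {
	endpoints []endpoint
	next      int // round-robin cursor
}

// pick selects a backend. algorithm is the hypothetical per-route
// choice; empty means "use the platform default", which is how the
// global setting behaves today.
func (p *pool) pick(algorithm, platformDefault string) *endpoint {
	if algorithm == "" {
		algorithm = platformDefault
	}
	switch algorithm {
	case "least-connection":
		best := &p.endpoints[0]
		for i := range p.endpoints {
			if p.endpoints[i].connections < best.connections {
				best = &p.endpoints[i]
			}
		}
		return best
	default: // "round-robin"
		e := &p.endpoints[p.next%len(p.endpoints)]
		p.next++
		return e
	}
}

func main() {
	p := &pool{endpoints: []endpoint{
		{addr: "10.0.0.1:8080", connections: 3},
		{addr: "10.0.0.2:8080", connections: 0},
	}}
	// Route-level override wins over the platform default.
	fmt.Println(p.pick("least-connection", "round-robin").addr) // 10.0.0.2:8080
	// No override: the platform default applies.
	fmt.Println(p.pick("", "round-robin").addr)
}
```

Structuring the choice as an override also leaves room for the later ideas in the discussion, such as a weighted or AZ-aware algorithm, which would just be another case in the switch.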
C
So this is something we would propose as an RFC, maybe, to make the changes. I know it affects quite some areas, but I first wanted to see if you're open to it. The other motivation I want to give is that it also opens the door for more fine-granular features, like adding new load balancing algorithms for some people to use. We have in mind to add more AZ-aware metrics on the gorouter, or metadata like: which AZ is this app instance living in?
C
Is
it
the
same
like
the
go
router,
then
we
could
build
some
like
weighted
round
robin
or
weighted
load
balancing
algorithm
to
choose
or
prioritize
app
instances
in
the
same
AZ,
and
we
know
there's
some
discussion.
We
also
have
an
issue
I
linked.
It
I
think
there
is
some
pros
and
cons
for
this,
but
with
this
step
one
that
I
just
like
summarized,
we
could
leave
the
choice
to
an
app
developer
or
have
someone
playing
around
with
it
even
and
don't
have
to
set
a
platform
white
setting.
C
Also,
we
just
received
a
request
of
another
load,
balancing
algorithm
with
some
ring
based
hashing,
and
it
sounds
very
great
and
we
would
like
to
to
support.
But
for
us
we
could
not
even
support
the
stakeholder
with
this
load
balancing
algorithm
because
we
can't
change
it
for
all.
So
we
are
just
blocked
without
like
making
it
configurable
per
app
instance.
So
my
question
is:
was
this
ever
discussed
or
were
there
some
arguments
against
it
or
is
it
just
not
implemented
and
everyone
would
say
yeah
it's
nice
to
have.
A
Yeah,
it's
definitely
something
that
we've
discussed
in
the
past
and
it
just
hasn't.
We
just
haven't,
had
the
resources
to
to
get
it
out
the
door
or
dedicate
time
to
it.
F
Yeah, I can see it being valuable to be able to experiment more with other routing algorithms at a granular level, rather than switching the entire environment over to something different. I think my main concern would be making sure that we're not making the developer experience too complicated or cluttered for people who don't care about this and are happy with the default. And then maybe there are some edge cases around what level of granularity would make sense for the configurability.
F
You
know,
maybe
the
easiest
thing
to
do
would
be
per
route
now
that
we
can
share
routes
across
space
just
and
things
like
that,
because
that
could
potentially
conflict
with
the
space
hierarchy.
That
we
generally
assume
is
an
effect,
but
yeah
I
mean
otherwise,
like
I
think
those
all
seem
like
things
that
could
be
resolved
and
I
agree
with
Jeff.
You
know
people
have
had
ideas
about
this,
but
there
hasn't
been
enough
urgency
or
momentum
to
implement
anything
to
do.
A
Cool, thank you. Max, you wanted to talk about stale issues.
E
Yeah, I hope you can hear me. Yep, perfect. It's more a question than really a talk. From time to time I've stumbled across issues that feel like they have some importance, or where we're waiting on feedback, and they have been open and inactive for, say, two months or more. It feels somewhat wrong to just keep them open forever and not take care of them.
E
So
the
question
is:
if,
if
we
are
interested
in
a
race
that
previously
on
slack
in
something
that
could
potentially
close
sales
issues
or
at
least
have
a
reminder
for
people
who,
where
we
wait
on
feedback
to
to
have
like
hey
your
issues,
still
open,
please
check
if
you
can
provide
some
further
input
and
then
ultimately
close
them.
If,
if
there's
no
input
and
yeah
the
reporter
just
disappears
or
if
we
have
strong
feelings,
why
we
wouldn't
want
to
do
that?
F
You
know
recording
automation,
automated
nudges,
you
sound
like
a
good
Next
Step
I'm,
a
little
cautious
of
automating
closing
issues,
I
think
the
Bosch
or
foundational
infrastructure
working
group
I've
been
thinking
about
that
for
a
while
and
there's.
Maybe
some
all
right
concerns
around
being
a
little
too
rude
to
people
in
the
community,
especially
if
we
chose
aggressive
time
frames
for
closing
issues
that
aren't
responsive.
F
But
you
know,
maybe
being
a
little
more
aggressive
about
manual,
closing
issues
that
seem
stable,
so
as
another
step
towards
reducing
the
EU
volume
of
Steel
cheese
that
we
have
across
repositories.
A
I
wonder
if
this
is
something
that
the
TOC
would
be
interested
in
just
for
providing
some
mechanism.
Opt-In
mechanism
like
they
did
with
Branch
protection
of
of
adding
some
sort
of
automation,
that'll,
add
GitHub
actions
to
do
this
for
us
and
for
other
teams
if
they
want.
E
Right, here it is, where's the chat... there. Reuben at some point opened an RFC for a common stale bot. It wasn't really accepted, and the end state is kind of weird, because a lot of people agreed that nudges or reminders are a good thing, but closing them is not, and instead of doing at least something, it was just discarded, I guess.
E
Kind of ironic; it kind of proves my point. I don't know, maybe we just reactivate the RFC and put it into a...
G
Don't
know,
but
if,
if
nothing
happened
for
the
past
two
years,
why
would
others
do
do
something?
Now,
it's
also
a
bit.
Usually
every
every
project
on
GitHub
has
some
kind
of
automation
about
it.
We
can.
We
can
take
a
look,
what
what
the
others
do,
but
I
think
I,
don't
know
one
or
two
months
would
be
enough
time
for
doing
something.
In
the
past
time,
I
was
also
working
on
some
some
issues
on.
G
...we suggest some things for how they can fix them, and then you sit there and wait for a few weeks for an answer. Maybe the people don't have the time, or they only do it when they have to. So in my view it's okay if you do some housekeeping and clean up the things that we don't want to have in our backlog.
E
So one thing we could also do, since I think most issues have at least someone assigned, is just say: if you're assigned to an issue, make sure to close it if it's been inactive for one or two months. So we could make this a manual process.
E
Basically, we could try that first. I don't know, but it feels like a lot of work, and I tend to lose sight of issues as well if they don't regularly bubble up in my notifications because someone wrote a comment. Having a stale bot that just comments, hey, this is becoming stale, would make sure that an issue regularly comes up and I see it and get notified about it. That was kind of the whole idea.
F
Yeah,
you
know
maybe,
rather
than
trying
to
solve
this
at
a
CF
wide
level.
If
there's
some
experiments
in
the
working
group
to
do
first,
that
could
even
inform
what
might
be
an
effective,
community-wide
policy.
F
I
think
you
know
I
I,
like
the
idea
of
you
know
if
we
decided
to
have
maybe
a
fairly
consistent
policy
about
issue
responsiveness
that
still
seems
reasonable
enough.
You
know
like
a
couple
months
or
something
like
that
and
then
making
that
clear
in
in
any
of
those
like
stale
nudges,
then
like
that.
That
seems
like
a
reasonable
thing
that
we
could
try
doing
to
start
cleaning
up
all
these
still
issues.
E
Yeah, and the message should repeatedly state something like: if you still have a need for this issue, feel free to reopen. I hope that everyone can do that; in that case this will be an easy fix.
E
So,
if
you're
interested
GitHub
has
a
new
search
for
issues
and
I
quickly
built
this
query,
which
surges
all
open
issues
on
the
cloud
Foundry
Arc,
which
haven't
been
updated
in
the
past
two
months,
and
it's
like
1.7
K
issues,.
A
That'll be fun. Would you want to volunteer for a proof of concept of the automation for this, Max?
E
Yeah,
so
the
one
thing,
that's
pretty
obvious
is
there's
a
stale
action
from
GitHub
directly,
but
I
think
that
only
works
with
repository
level,
so
I'll
probably
be
looking
into
something
that
runs
on
an
orc
on
a
limited
set
of
repositories.
A
Awesome. Does anyone have other things to bring up or talk about?
A
All
right,
well,
I,
guess:
that's
our
meeting
this
month,
good
to
see
new
faces
and
I
will
see
you
next
month.