From YouTube: App Runtime Platform Working Group [May 3, 2023]
A: Everyone, I'm taking over for Amelia this month while she's on PTO, covering this working group even though I have no official power whatsoever. However, Ben over here can answer all the logging and metrics questions that are raised, or hold court on those topics, and Jeff, I believe, should be able to hold court on everything else. Jeff, sorry for putting you on the spot. Let's go ahead and just kick it off, I guess.
A: The first item on the board (I'll just share my screen, just in case) is the proposal to change the algorithm for byte-based log rate limiting. Hopefully we've all had a look at this by now, and we've all had a chance to talk about it.

A: There was one thing in particular we agreed to discuss in this meeting. Jovan, I believe you proposed the idea of going back to a timeout: rather than the proposed solution, where we introduce a penalty box, you proposed we could switch back to what we used to do in log-line-based limiting, which was that for every second we experience log rate limiting, we emit one log line saying "error: you're exceeding your limit", or something along those lines.

A: We've had some discussions on our end about this, and I think we're leaning towards keeping the penalty box, although you did raise some valid ideas and some good points about why the timeout system was good. Do you want to jump in and talk more about your counter-proposal?
B: I don't have a counter-proposal; it's more about sharing our experience with what we've had so far. From time to time we have backing services that change something, the apps cannot catch up, and we have problems where the apps are getting rate limited, and on top of the dropped logs the applications' rate-limit count metrics also go up.

B: We can keep that, dropping all the logs; I'm also fine with that. We also noticed, with the previous behavior where logs were buffered in the Executor, that an app which was being rate limited still showed up among the top log producers, because all of those buffered logs were sitting somewhere in the space between the Executor and the Loggregator.
B: On the other side, whether we drop for a second or not, I'm not against it. I was thinking about something like a bursting mode, so that we say we have a rate limit of, for example, 400 bytes per second, and then we say that within a minute an app gets maybe ten times that as a burst. So we give it a quota capacity, and the app can use that log-rate quota inside of some time interval. But...
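(A minimal sketch of that bursting idea as a token bucket, with hypothetical numbers: the bucket refills at 400 bytes per second and can accumulate up to a 4,000-byte burst. This is an illustration of the proposal, not the actual Diego or Loggregator implementation.)

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket sketches byte-based rate limiting with burst: quota refills
// at `rate` bytes/sec up to `burst` bytes, and a log line is admitted only
// if its whole size still fits in the bucket.
type tokenBucket struct {
	rate     float64 // refill rate in bytes per second (e.g. 400)
	burst    float64 // maximum accumulated quota in bytes
	tokens   float64 // currently available quota
	lastFill time.Time
}

func newTokenBucket(rate, burst float64) *tokenBucket {
	return &tokenBucket{rate: rate, burst: burst, tokens: burst, lastFill: time.Now()}
}

// allow reports whether a log line of n bytes may be emitted now.
func (b *tokenBucket) allow(n int) bool {
	now := time.Now()
	b.tokens += b.rate * now.Sub(b.lastFill).Seconds()
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.lastFill = now
	if float64(n) <= b.tokens {
		b.tokens -= float64(n)
		return true
	}
	return false
}

func main() {
	bucket := newTokenBucket(400, 4000)
	fmt.Println(bucket.allow(1024)) // true: fits within the initial burst
	fmt.Println(bucket.allow(4096)) // false: burst capacity is exhausted
}
```

(The burst capacity is exactly the extra knob debated next: it lets short spikes through, at the cost of one more parameter someone has to pick a default for.)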
B: If we keep it simple, and we say that we're dropping everything with the penalty box for a predefined time interval, that could also be okay.
A: Interesting, because introducing the idea of bursting doesn't necessarily conflict with that: we could do bursting and still do the penalty box, or still switch to the timeout. Bursting would be a separate (trying to think of the word) a separate parameter, I guess. We did experiment a little bit with the idea of bursting when we first changed from log-line limiting to byte-based limiting. I don't think we saw huge benefits from it, though obviously we weren't running in a production environment.
D: I think we talked about bursting as well locally. It's hard to say without knowing what you're setting your limit to and what kind of logs you're trying to get through. There was definitely some difficulty in that if we allow bursting, it leaves room for people to want to adjust how burstable things are, and rather than making a decision about what the default burstiness is, we just decided not to allow a burst.
B: That's totally fine. From a practical point of view, even if you allowed bursting it doesn't help much: when an application does something wrong, or something happens with the application and it floods a lot of stack traces or something similar, it goes on for a longer period of time until the application owners do something about it.
B: So whether you burst or not, or whether we drop some part of the logs and let some through, it doesn't really make a difference, I think. So maybe we can start with a simple solution like the penalty box, and then we'll see how it goes.
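(For contrast with the token bucket above, a sketch of the penalty-box behavior as described in this discussion: once an app exceeds its per-window byte budget, all of its logs are dropped until a penalty interval expires, which is what produces the contiguous kept/dropped blocks mentioned next. The limit, window, and penalty values are hypothetical.)

```go
package main

import (
	"fmt"
	"time"
)

// penaltyBox sketches the "drop everything for a while" behavior: once an
// app exceeds its byte budget for the current window, every subsequent log
// is dropped until the penalty interval has passed.
type penaltyBox struct {
	limit      int           // byte budget per accounting window
	window     time.Duration // accounting window, e.g. one second
	penalty    time.Duration // how long to mute an offending app
	used       int
	windowEnd  time.Time
	mutedUntil time.Time
}

// allow reports whether a log line of n bytes may be emitted at time now.
func (p *penaltyBox) allow(n int, now time.Time) bool {
	if now.Before(p.mutedUntil) {
		return false // still in the penalty box: drop everything
	}
	if now.After(p.windowEnd) {
		p.used, p.windowEnd = 0, now.Add(p.window)
	}
	p.used += n
	if p.used > p.limit {
		p.mutedUntil = now.Add(p.penalty) // enter the penalty box
		return false
	}
	return true
}

func main() {
	p := &penaltyBox{limit: 400, window: time.Second, penalty: 30 * time.Second}
	now := time.Now()
	fmt.Println(p.allow(500, now))                 // false: over budget, now muted
	fmt.Println(p.allow(10, now.Add(time.Second))) // false: still inside the penalty
}
```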
A: Sounds good, cool. Hopefully this introduces more contiguous blocks into the logging output and does away with that issue of too many error log lines being output.
B: Oh sorry, no, I wanted to mention that it's also interesting that most people are using the default settings for their application loggers, so nobody cares about adjusting things. For example, with line-based log rate limiting we saw everything packed into a single line, because a line can be up to 64 kilobytes and is then divided by the Executor into multiple log messages. So I guess doing...
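(To put rough numbers on that incentive: under a line-based limit of, say, 100 lines per second, a hypothetical figure, an app that packs each line to the 64-kilobyte maximum can push about 100 × 64 KB ≈ 6.4 MB of logs per second, while an app emitting 100-byte lines gets only about 10 KB per second. A byte-based limit charges both apps identically.)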
A: Yeah, I think down the line there are some other interesting things we can do with byte-based limiting. I know Renee Chu had proposed the idea of truncation on the issue, which is another possibility for the future: actually trying to figure out if we could be truncating the logs in some way, although that comes with its own difficulties. But byte-based versus line-based limiting also opens up the possibility of truncation, right, still getting some of the content through.

A: I don't know, those are possibilities. Anything else anyone wants to say about that proposal?
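(A tiny sketch of what byte-based truncation could look like: keep the head of an oversized line and mark the cut. Purely illustrative, with a made-up marker; real truncation would also have to respect UTF-8 boundaries and structured-log formats.)

```go
package main

import "fmt"

// truncate keeps at most budget bytes of a log line, appending a marker
// whenever anything was cut off.
func truncate(line []byte, budget int) []byte {
	const marker = "...[truncated]"
	if len(line) <= budget {
		return line
	}
	if budget <= len(marker) {
		return []byte(marker[:budget])
	}
	keep := budget - len(marker)
	return append(line[:keep:keep], marker...)
}

func main() {
	fmt.Println(string(truncate([]byte("a very long stack trace frame goes here"), 24)))
}
```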
A: I should have been taking notes; Amelia is so good at that. I'll try and write something down right now. Jovan, you had the next item. Before you go into that: it's possible that I'm gonna have to leave somewhere in the middle of this, so I'll post a little Zoom message when I've gotta go, and I'll let y'all continue talking. If someone is down to take notes, please do; otherwise, the session is recorded, so there's always something to look back at.

A: Jovan, hit us with how to send custom application metrics.
B: Yeah, lately we're getting some requests from application developers: they want to send custom metrics and get them through Loggregator or syslog drains. I was searching through the repos to find what we have at the moment.

B: From what I've seen, we have the go-loggregator library and the loggregator-tools. Those have nice examples, but in such cases we have to provide certificates to the application.
B: So one application can send logs to the Loggregator API that way. I'm wondering whether we have any other ways to give applications a path for custom metrics to be ingested by Loggregator. I sorted through that during my search, and then, only a few moments before starting the meeting, I also found this metric registrar CLI from Pivotal. I find it a bit interesting, but I haven't taken a proper look at it.
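(For reference, this is roughly what emitting a custom metric through the Loggregator V2 ingress API looks like with the go-loggregator library just mentioned. The certificate paths, agent address, and metric names are placeholders, and the option names are from memory of the library's README, so verify them against the repo. The point is that the app needs the agent's mutual-TLS certificates, which is exactly the tenancy concern raised below.)

```go
package main

import (
	"log"

	loggregator "code.cloudfoundry.org/go-loggregator/v9"
)

func main() {
	// Client-side mutual TLS: these are the Loggregator agent certificates
	// an ordinary application should not normally hold.
	tlsConfig, err := loggregator.NewIngressTLSConfig("ca.pem", "client.pem", "client.key")
	if err != nil {
		log.Fatal(err)
	}

	client, err := loggregator.NewIngressClient(
		tlsConfig,
		loggregator.WithAddr("localhost:3458"), // placeholder agent address
	)
	if err != nil {
		log.Fatal(err)
	}

	// Emit a hypothetical custom gauge on behalf of an app instance.
	client.EmitGauge(
		loggregator.WithGaugeValue("custom-metric", 42, "units"),
		loggregator.WithGaugeAppInfo("app-guid", 0),
	)
}
```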
D: Yeah, so I think there's a little bit of interesting history around custom metrics. I know we've had trouble getting adoption around it ourselves, and there are kind of two awkwardly parallel efforts there: the App Autoscaler has its own implementation of metrics, and it looks very similar. Sorry, let me pull back again.

D: I've been on something like two or three efforts to enable custom metrics on the platform. The original effort was a service broker that you would attach to your application and could push metrics to. We saw that it was a custom interface, it required making HTTP push calls to implement, customers weren't really implementing it, it was kind of sketchy to push to a service broker, and I don't think it necessarily scaled that well.
D: I think that looks a little bit similar to what App Autoscaler has at the moment, and both of those were closed-source efforts. Of course, you can look up documentation on how that sort of thing works; it's not that hard to access. The second effort we did was syslog drains, but not really: the only reason we used syslog drains is that it's an easy place in the API to put stuff, and more or less that also exists kind of off-platform. There's a wish I've had at times to have something like a Prometheus scraper or something else handle app metrics, but I don't think I've gotten much traction on that effort.
D: Of course, there are internal pressures, or at least a lack of pressure, around prioritizing introducing this into the Prom Scraper. But it's definitely something I feel could be a way to do things. The awkward thing about the loggregator-tools reference is, oh no, I would very much not suggest letting your application send directly to the Forwarder Agent. Arguably, something we try to keep important is your tenancy boundaries, right? And if you have access to a Loggregator agent's mutual TLS certificates, you can emit logs and metrics as any application on the platform.
D: When we talk about enabling custom metrics, whether it's through a metrics gateway or through something like the metric registrar, one of the goals at some point is maybe to actually limit the amount of metrics you can emit. That's something we've started to look at, but one of the goals is to make sure that the application can't cross those tenancy boundaries.

D: Broadly speaking, I think we've also pushed more towards open standards. Standardizing around raw Prometheus metrics is something where we've thought: yeah, this would be cool if applications could do it.
B: When the applications expose a Prometheus-scrapable endpoint, /metrics or whatever, then we need configuration on the Prom Scraper, or somewhere else, so that it can scrape those metrics there.
D: It won't. It's kind of awkward that we have that in, like, an accessible repo. I don't think it's actually very valuable in and of itself: the only thing the metric registrar CLI does is create a syslog drain with a defined format and hide that behind the CLI, uh-huh.
D: I'll definitely try and start some conversations internally about this; I think this is... We haven't heard a lot of interest in this from this sphere, so I'll definitely reach out and talk to my team about it.
B: But we can also tell the app owners that they can, I don't know, simply provide a Prometheus-scrapable endpoint in their application with their custom metrics, and they can scrape it with something else, because the application has a public route anyhow, so it will be available for such things, right?
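(A minimal sketch of that suggestion using the standard Prometheus Go client: the app registers a custom metric and exposes it on a scrapable /metrics endpoint on its own route. The metric name and port are placeholders.)

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a hypothetical custom application metric.
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total requests handled by the app.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("ok"))
	})

	// Whatever scrapes this (the platform's Prom Scraper or the app team's
	// own infrastructure) just needs to reach GET /metrics on the app route.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

(As noted next, if that endpoint sits on a public route, the metrics are exactly as exposed as the route is.)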
D: Yeah, we've definitely heard of people doing that as well, setting up parallel infrastructures for their custom app metrics. Those metrics wouldn't end up in your syslog drains, and they wouldn't end up in your Log Cache if you're doing various org- or space-wide auditing or larger efforts. It's a lot more work to set up, but it's definitely something we've heard of. But no, they're using public endpoints, which, I don't know.
D: Generally speaking, I'd say you should consider your metrics just as confidential and in need of securing as your logs. Not everyone feels the same way about that, and depending on how you set up your environment they might not be very publicly accessible, right, but...
D: If you have something within the platform, you can ensure that there's consistency from the app operator, developer, and workspace operator perspectives, various consistency parameters in terms of how you egress logs and metrics. But there's also, hypothetically, room to move that from a space where it's public to a space where it's more securely transmitted.

D: Well, I'll start some discussions about it, and we can see how things go.
B: I don't know whether, theoretically speaking, the application exposing, for example, a Prometheus endpoint over mTLS would be enough, so that they have something on their end. Usually everyone who uses these log drains has some kind of infrastructure already, so adding another service...
B: On docs.cloudfoundry.org, I've noticed that in some cases these integrations get changed by the service providers and we don't catch up with the current versions, and I don't know what the process would be there to update the documentation. Do we need to verify that everything works as described in the documentation, or can we simply link things, as in "configuring this is described on the service provider's site", and then only describe what to do locally?
D: So this is the "Streaming Logs to Third-Party Platforms" section. Okay, so I recently updated the documentation for, sorry, can I say recently? That's not that recent, is it. Let me check.

D: Back in May, I updated the documentation. Sorry, back in May of last year I updated the docs for streaming app logs to Fluentd, so that's probably the most up-to-date page in the streaming-app-logs section. I don't think the other sections have gotten updated in a long time, though I've tried to in some ways, I think.
D: Historically, the Loggregator team has tried to keep somewhat of an arm's length from supporting specific integrations. Sometimes we've had teams go in and work hard at a one-off effort to either create some docs or, in the past, create nozzles for specific integrations, but they tend not to be very long-lived or long-supported efforts, so to speak, if that makes sense.

D: So all of these docs are almost certainly way behind, right. For example, I don't think the syslog plugin for Splunk is very well supported, or, as you're saying, supported at all. My understanding is that we've gone through kind of two iterations of Splunk support.
D: The second iteration was the Splunk nozzle, I believe; I might be mixing up the order there. There's also Splunk Connect for Syslog, which is probably what Splunk would tell you to use these days, and which is just a Fluentd, no sorry, a syslog-ng kind of integration. "Streaming app logs to Azure OMS Log Analytics" still technically exists, but let me double-check something. Nope, not this: it links to what I think is an old version of the repository, and I don't believe even OMS Log Analytics itself is current; I think it's a somewhat deprecated product, and I can't remember the new name for it.
D: Maybe "streaming app logs to management services" is up to date, but if you see some room to update it, verifying it and then changing the docs, feel free. If you want me to take a look, give me a ping and I'll probably take a look, although I might not go through the full verification for it, if that makes sense.
B: Yeah, what we usually do is configure a syslog drain; the URL has to be publicly reachable, and that's it. But sometimes people read through the docs and say: yeah, but this thing doesn't work, this integration is not there anymore. And sometimes they're simply too lazy to Google things and see what the actual state is.
D: In the past I had this belief that we had some degree of pull on observability tools to integrate with us. I think I've come to the understanding that that isn't necessarily the case. I think most of the nozzles that exist existed because we at some point created them, not because people were interested in integrating with Cloud Foundry. In the same sense, there was this hope at some point on the team that if we support syslog, then eventually people would say: oh well, we should support syslog in Splunk, or we should support syslog in some other external integration. That hasn't necessarily followed.
D: I think I brought up in the logging-and-metrics channel that maybe there's some interest at some point in expanding the subset of push egress formats that we allow. If we supported two or three widely supported protocols for sending metrics and logs, maybe that gives us more room to just say: well, if you support these protocols, you can send to whatever you're receiving with, right.
E: A long time ago, in a case where we introduced some turbulence in some of the configured availability zones, we observed that we were losing not only Diego cells from the so-called unhealthy zone, but also Diego cell presences from the healthy zones, depending on how the system is distributed: where the main Locket server actually is, which one is the active Diego API, etc. A lot of key factors which at some point in time led to weird behavior.

E: But anyway, thanks for supporting us there. The next topic is another issue which we identified.
E: It's more or less related to the use case where we have thousands of application security groups, or thousands of applications, in a single space, and at some point in time this generates enormous Diego API outgoing bandwidth, because a lot of data is sent over the wire to the route-emitter that is never actually used by the route-emitter.
E: So we've proposed an improvement to this behavior; the pull requests, I just gave you a link to them, were filed by a colleague from our team. To be honest, I'm not sure what the policy on reviewing them is, so I'm not sure whether it's fair for me to do some sort of review as the reviewer. We were just interested, so I'm asking.
E: I just want to add a short disclaimer here: right now we reuse the existing DesiredLRP structure, but we do not fill all the elements in the structure. I want to know what you think about this, because otherwise, if we introduce a completely new structure, that will introduce a big change in the route-emitter.
E: So that is a short disclaimer, just to make sure. Okay, yep. Okay, so yeah, I will do an initial review and let's see where we end up.

E: Okay, and the final topic, again from my side: what actually is the policy for access to the Diego Concourse pipelines? We often receive requests, either from you or from someone else who is doing the interrupt duty, that some pipelines are currently broken, and unfortunately we cannot see them. Do we have some sort of policy for accessing those pipelines?