From YouTube: App Runtime Platform Working Group [June 7, 2023]
B
Yeah, we're looking at turning on the RFC for branch protections, using automation that is set up for everyone to make use of. It'll, I guess, require PRs to all the repos, and if there's anything that we need to whitelist or opt out of, that can be done. Just let me know: I didn't hear much feedback in the Slack channels about this. I just wanted to bring it up and give everyone a last chance to object.
A
So Friday, probably, we might see things break, and then people might yell at you and say, actually, I want this repo opted out, and that's going to be possible. Well.
B
Any workarounds? Actually, I don't know how the automation undoes things.
B
It might require fixing the automation, and then also someone with admin privileges.
D
Cool. One thing about the code owners that people should be aware of is we've realized that it means we get pinged on every single PR and issue. If your GitHub notifications aren't set up correctly, oof. Definitely turn off that, like, email setting, or else it's going to kind of blow up.
B
Yeah, in response to the recent issue with routing algorithm changes causing a CVE: I'm thinking of requiring that any new changes like that, changes to the routing algorithms or gorouter, or maybe expanding this out even further to just, like, all the features, should be opt-in: off by default, configurable. Curious what everyone else's thoughts were on something like that.
B
No, so there was a new feature in the gorouter's retry logic, so that it could retry certain POST requests if the requests hadn't actually been sent to backends. As a result, it also got requests that were canceled tied up in that, and those could be retried, and then, because they were canceled, they immediately failed, and brought the backend apps offline.
B
It would have been easier to turn off, and it would have been something that wasn't immediately affecting anyone who upgraded; they would have had the option to do it.
A
I get the sentiment, but on one hand I am worried about just having so many feature flags, and then, like, when do we get rid of them? Because you can imagine: oh, there would be a feature flag for this; oh, and then we add a feature flag for AZ-aware; and then we add a feature flag for, we want to change retryable, you know, I don't know, there's some thing for that; and then we want to add a feature... I don't know. I can just imagine that there's, like... okay.
A
I feel like one thing we're not very good at is getting rid of these feature flags. I wonder, if we're going to require this, if there could be something like, for example, this feature that was put in that ended up causing the CVE: if we said, okay, we think we want this on for everyone, we'll add a feature flag, put a date in the description for the feature flag, and be like, after six months, turn this on for everyone, or something like that. Yeah.
D
Yeah, I like the idea of having some methodology for doing stuff that's less... that's not security-related: off by default, delete the feature flag at some point once we turn it on. That sounds nice. I don't know if I'm as enthused by the idea of, like, everything in routing should be off by default, and that we should go down that path.
A
Jeff, were you thinking about where we should propagate this idea, or record this idea?
A
You share it with the group to make sure that we agree on, like, the wording and everything, but I think generally in favor.
A
Yeah, is there a PR that you have to make to the community repo for that to happen?
E
To be honest, I'm not aware of what actually needs to be done. We haven't done that so far. So if there is some sort of process, I can do so.
A
Yeah, I'm definitely in favor of anyone becoming a contributor, especially with the work it looks like he's done so far. Let me figure out offline what the process is, and...
D
Yeah, we've been considering our aggregate metric egress options for some time now. This is just sort of a heads-up that we'll probably look to publish some kind of proposal doc soon. OTel stands for OpenTelemetry; they probably should have, whatever the word is, shortened that. There's a bunch of competing metric and log egress agents.
D
Right now, some notable ones are Fluentd, Fluent Bit, Telegraf, OpenTelemetry; I guess Filebeat is one that apparently people are very in love with, or hate. We've been looking at a bunch of them recently, and considering ways that we could start incorporating off-the-shelf kinds of agents that we could configure to provide a lot of different metric egress options right off the bat. The history of metric and log egress in Cloud Foundry is, usually we like to build our own agents so that we can control them completely, which has been great for a lot of reasons and bad for one very, very big reason, and that's that we basically limit ourselves to only one protocol or two protocols at any given time. It's very hard for us to manually build and manually maintain agents that can speak a lot of different metric or log protocols, and nozzles were one way to solve that in the past.
D
But with the Loggregator M-times-N scaling problem, we've been looking for an alternative, you know, a gateway or sidecar type solution, for some time, and we seem to be aligning on the OpenTelemetry Collector, primarily because it's written in Go, it has a more active community than Telegraf, and it's a CNCF project. And yeah, I guess just to compare and contrast real quickly: Telegraf is actually spelled with an F instead of a "ph" at the end. I know, tech companies, and...
D
Yeah, yeah, okay. I guess Fluentd is the old version of Fluent Bit; like, Fluent Bit is the new thing, so we wouldn't use Fluentd in favor of Fluent Bit. But Fluent Bit is written in C, and we find it damn near impossible to parse or, like, do anything with, so that was kind of off the table pretty quickly, even though it's potentially the most lightweight option of the group. Telegraf is written in Go, and it's been well supported for a while.
D
Now, the big downside seems to be that there doesn't appear to be much open source commitment from the closed-source company that supports it. So it's not a guarantee that we would be able to kind of contribute and change things if need be, or get involved in that community in general. OTel is CNCF.
D
A lot of CNCF projects are starting to align behind it. It seems like there's a lot of excitement there and a lot of activity, and it's written in Go, so that's kind of where we're heading. Filebeat has some nice features, and people seem to either love it or hate it, but we didn't really seriously consider it. As far as I know it's not written in Go, and it's really, really old.
D
You really... you'd only use it if you were already using it.
D
So, ideally, we would look for a BOSH release solution where we could drop a new agent onto the VM, with that agent being an OTel Collector. And now you'd have the ability to configure it: instead of using your old nozzles to subscribe to the Loggregator and then forwarding logs from there, you could instead plug in an aggregate metric drain, in the same vein as how operators plug in aggregate syslog drains, and have that aggregate metric drain basically push from every VM towards your unified, whatever, metric solution.
D
But yeah, I think I've given a couple of community talks now, talking about the varying degrees of versions in the metric and log architecture of Cloud Foundry, and it's just, like, version one after version two after version three, and getting rid of one and two. The Loggregator stuff necessitates a new metric egress option.
D
And we would ideally like to bring in an open source, standard agent, rather than build our own.
E
Is there such an open-source agent available, or should we somehow instrument the code internally, so that we provide some sort of metrics?
E
I was also interested in OpenTelemetry, and we had an idea to try to instrument the BBS code itself, so that it provides some sort of OpenTelemetry metrics, or whatever, and to try to collect those metrics with Dynatrace on top, for example, and build some dashboards, etc., etc. And I was more or less interested in whether there is already a kind of an agent available, because currently we use the Dynatrace agent running on the VMs, but it's actually flaky sometimes.
D
Sure, there is an agent available. It's called... in OpenTelemetry land it's called the Collector, and it effectively...
D
...is this Go core where you can set up a configuration that basically looks like a bunch of little pipelines, based off of what they call receivers, or ingress vectors, and exporters, like egress vectors. Each of those receivers or exporters, along with processors and a bunch of other concepts that they have, provides the ability to listen and send on any protocol, with the core sort of translating between all of them using OTLP. Which provides for some really nice use cases where, in, like, an ideal long-term world, you know, you could have an operator who's like: I have an aggregate metric drain...
D
...that I need to speak in OTLP, going to Splunk, for instance, but I also want us to continue to forward, like, Loggregator envelopes to our old Splunk instance, so that that can keep going while we start getting this new one up off the ground. And at the same time, and this is, like, the very far, fun future now: maybe it is a world where we could start doing metric egress drains at an application level, which would be kind of cool, where an app developer could say...
D
...I would like for my app's metrics to be forwarded over this protocol. How we would do that backend connection is maybe another story; that's the tricky part. But the nice thing about the OTel Collector is you specify the list of, like, protocols that it contains up front, which affects only the initial size of the thing.
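For reference, the receiver/processor/exporter pipeline model being described is configured in the Collector along these lines. This is a rough sketch: the endpoints are placeholders, and exactly which receivers and exporters are available depends on which components are compiled into the particular Collector distribution:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # components that already speak OTLP
  statsd:
    endpoint: 0.0.0.0:8125       # legacy components could keep emitting statsd

processors:
  batch: {}                      # batch before export

exporters:
  otlphttp:
    endpoint: https://metrics.example.com:4318   # hypothetical downstream
  prometheus:
    endpoint: 0.0.0.0:9090       # scrape endpoint for a second consumer

service:
  pipelines:
    metrics:
      receivers: [otlp, statsd]
      processors: [batch]
      exporters: [otlphttp, prometheus]
```

Each pipeline fans in from its listed receivers and fans out to all of its listed exporters, with OTLP as the internal representation, which is what enables the dual-egress case (new drain plus old destination running side by side) described here.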
D
As far as we can tell, it's not big enough to be a concern on the VM, at least, and then you could speak to it in any of those protocols and send out of it in any of those protocols. You mentioned also instrumenting components to speak other protocols; that's something that we've historically tried to do, and failed at pretty significantly, in the past.
D
Like, part of the reason there's so many metric agents is that we've been unable to move certain components off of their metric egress solutions from years past, right? Like, UAA and Capi still use the statsd collector and haven't been able to move off of it for various reasons. This is more of a side note, but I would hope that by adding an agent that could actually listen in multiple formats, rather than in our, like, custom format...
D
...we would be able to eliminate some of our other listeners and have those components just send directly to this kind of default demux-and-mux product. And, you know, then maybe we could have a look at trying to get component teams to actually change the way that they egress metrics to something that's a little more comprehensible. But that's... that's been harder to do.
D
Anything else I want to call out here? Yeah, proposal incoming; keep an eye out for it. Nice, yeah.
A
Well, well, it's great seeing you all, and Plum, and I'll try to get that information to you right after this, about how...