From YouTube: App Runtime Platform Working Group [Jan 12, 2021]
B
So in today's working group, I thought, okay, let's go back: in the last working group meeting we talked about the problems with knowing what's going on and what people are working on, and so I thought that in this working group that's what we would talk about.
B
C
Sure, let's go.
A
C
There we go, welcome back. Thank you. Where's the right one... this is the right one. Cool. Dynamic ASGs is what I was going to talk about, if I can get it... here it is, okay. So one of the things we're working on in the short term is trying to make application security group rules apply to containers after you make changes to them, without requiring an app restart.
C
So there's this proposal up. There's a GitHub issue on cf-networking-release that talks about it and links to this proposal; I encourage you to read it all. The main overview of how things work now: ASGs get set in CAPI and get propagated to Diego when it says "give me a new DesiredLRP"; then the cell grabs those on LRP start, sends them to the CNI, and the CNI implements the ASGs when the container starts up.
C
Then, periodically, the VXLAN policy agent will poll the policy server for the network policies that get set directly in the policy server from the CLI. The new architecture is basically the same, except the VXLAN policy agent can also be configured (on by default, but it can be turned off) to poll the policy server for ASG data, and the policy server will also poll CAPI to sync up all of that ASG data.
C
That way we don't need the policy server or the VXLAN policy agent talking to CAPI all of a sudden, and we don't have to make a bunch of changes to Diego to figure out how to get those updated all the time and potentially cause restarts or instability on the Diego side. And then, in the same loop that applies new C2C policies, the policy agent is going to apply the new ASG updates. So that's kind of the high level of it.
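To make the polling model above concrete, here is a minimal Go sketch of the kind of loop the VXLAN policy agent could run. The endpoint path, the rule shape, and the applyRules helper are assumptions for illustration; this shows the poll-compare-apply idea, not the actual cf-networking-release API.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"reflect"
	"time"
)

// ASGRule is a simplified stand-in for an application security group rule.
type ASGRule struct {
	Protocol    string `json:"protocol"`
	Destination string `json:"destination"`
	Ports       string `json:"ports"`
}

// fetchASGs polls the policy server for the current ASG data.
// The endpoint path is hypothetical.
func fetchASGs(serverURL string) ([]ASGRule, error) {
	resp, err := http.Get(serverURL + "/networking/v1/internal/security_groups")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var rules []ASGRule
	if err := json.NewDecoder(resp.Body).Decode(&rules); err != nil {
		return nil, err
	}
	return rules, nil
}

// applyRules stands in for the CNI/iptables programming the real agent does.
func applyRules(rules []ASGRule) {
	for _, r := range rules {
		log.Printf("applying %s %s %s", r.Protocol, r.Destination, r.Ports)
	}
}

func main() {
	var lastApplied []ASGRule
	ticker := time.NewTicker(30 * time.Second) // same cadence idea as the C2C policy poll
	defer ticker.Stop()

	for range ticker.C {
		rules, err := fetchASGs("https://policy-server.service.cf.internal:4003")
		if err != nil {
			log.Printf("poll failed, keeping last applied rules: %v", err)
			continue
		}
		// Only reprogram the data plane when the desired state actually changed,
		// so a steady poll does not churn rules on every tick.
		if !reflect.DeepEqual(rules, lastApplied) {
			applyRules(rules)
			lastApplied = rules
		}
	}
}
```

The point of the comparison step is the one made in the meeting: ASG changes reach running containers in the same kind of loop that already applies C2C policies, without any restart or any new push path through Diego.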
B
I'm curious, from the non-VMware folks: is this an issue that you run into?
D
C
Cool, I guess that's all I've got. Who's next?
E
Sure, yeah. So basically customers want their C2C traffic to have TLS, and we don't provide that: they can only talk to port 8080, or whatever ports they start using if you have unproxied port mappings.
E
So they kind of started using that, but that port presents a certificate with the IP address of the container, and for C2C they have to either figure out the IP address of the container they want to talk to, which is not convenient, or they have to ignore the certificate.
E
So that's why there's this whole request: can we please provide an endpoint they can talk to where there is a certificate that includes the application's internal route, so that they don't have to ignore certificates? The original investigation was done by Andrew, and we looked at a couple of solutions.
E
One is to include a wildcard for all internal domains; that way you don't need to do any updates or restarts of the application whenever a map-route or unmap-route command is run. The other solution is to include all of the application's specific internal routes in the SAN of the certificate. We thought that would be more secure from a route-integrity point of view, and we validated that it's kind of protected by network policies.
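As a rough picture of what "include the app's internal routes in the SAN" means, here is a small Go sketch that builds a certificate whose DNS SANs are an app's internal routes. The route name, the self-signing, and the validity window are illustrative assumptions; in the real platform the instance-identity CA would issue the certificate.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// The per-app internal routes that would land in the certificate's SANs.
	// "example-app.apps.internal" is a hypothetical route used for illustration.
	internalRoutes := []string{"example-app.apps.internal"}

	template := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "instance-guid"}, // placeholder identity
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     internalRoutes, // the app's internal routes, not just the container IP
	}

	// Self-signed here for brevity; the platform would sign with its own CA.
	der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

With the route names in DNSNames, a client that dials the internal route gets ordinary hostname verification instead of a certificate that only matches the container IP.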
E
But even if you have network policies in place, there is still a possibility that you can talk to different containers you're not intended to talk to, and we actually found out that there's a high probability of that with the way we give out ports. So yeah, we decided to go with the solution of including all of the app's internal-domain routes in the SAN. And I kind of want to go over this: we decided to dedicate a specific port.
E
So if you have port 8080, then you're going to get a dedicated, pre-known port, 61443, that will be exposed on the Envoy, so that C2C clients (other containers) can connect to that for TLS. Why is that? Because we don't want clients to have to figure out what port is there, so we decided to go with a predefined port, kind of like what we have for 8080.
E
Yeah, I wonder what else... so the process for clients here: they map the route, they add a network policy, and then they can talk to the destination application container on port 61443 without ignoring the certificate.
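From the client application's point of view, the flow described here could look roughly like the following Go sketch. The internal route name, the CA file path, and the request are assumptions for illustration; 61443 is the pre-known Envoy TLS port mentioned above.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Trust the platform CA that signs the instance-identity certificates.
	// The path is an assumption for illustration.
	caPEM, err := os.ReadFile("/etc/cf-system-certificates/instance-identity-ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	// With the destination app's internal route in the certificate SANs,
	// the client dials the well-known Envoy TLS port and relies on standard
	// hostname verification instead of skipping certificate checks.
	resp, err := client.Get("https://example-app.apps.internal:61443/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```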
E
We also implemented a way of updating existing containers. BBS doesn't have, right now, a way of communicating with the rep to update existing containers, so we introduced a new API call, and that will swap the certificate for Envoy. We actually can read updated certificate files from disk and just reload them for the running process, which is pretty cool. So that's done up to this point.
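The idea of reading updated certificate files from disk and reloading them for a running process maps naturally onto Go's tls.Config.GetCertificate hook; this is a minimal sketch of that pattern. Envoy handles certificate swaps its own way, and the file paths and port here are placeholders.

```go
package main

import (
	"crypto/tls"
	"net/http"
	"sync"
)

// certReloader returns the most recently loaded key pair on every TLS
// handshake, so replacing the files on disk takes effect without restarting
// the process. File paths are illustrative.
type certReloader struct {
	mu       sync.RWMutex
	cert     *tls.Certificate
	certPath string
	keyPath  string
}

// Reload re-reads the certificate and key from disk and swaps them in.
func (c *certReloader) Reload() error {
	cert, err := tls.LoadX509KeyPair(c.certPath, c.keyPath)
	if err != nil {
		return err
	}
	c.mu.Lock()
	c.cert = &cert
	c.mu.Unlock()
	return nil
}

// GetCertificate serves whatever key pair was most recently loaded.
func (c *certReloader) GetCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.cert, nil
}

func main() {
	reloader := &certReloader{certPath: "/etc/certs/server.crt", keyPath: "/etc/certs/server.key"}
	if err := reloader.Reload(); err != nil {
		panic(err)
	}
	// Call reloader.Reload() again whenever new files are written to disk,
	// for example from a file watcher or an update API call.

	server := &http.Server{
		Addr:      ":61443",
		TLSConfig: &tls.Config{GetCertificate: reloader.GetCertificate},
	}
	panic(server.ListenAndServeTLS("", ""))
}
```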
E
The other thing we have to think about is this: if the initial update call is missed, we want to have that convergence cycle where we're going to track, okay, what's actually out there, what our ActualLRPs have, converge that in BBS, and try to send those updates in the convergence cycles.
B
E
Yeah, yeah, gpmc is excited about it. They have to turn on IPsec, and they use unproxied port mappings to do that. Yeah.
B
Nice. Okay, Ben is up next.
F
Howdy. So there's a proposal... actually, let's see, there is a list of these... there's a proposal, and I'm going to actually change this to be a link. Maybe... can I do that in the Google doc? No... I guess I can. There's the HTTP proposal; I can link it in our chat.
F
The hope is to switch from emitting HTTP events to HTTP metrics with regards to applications.
F
I currently have a branch starting work on trying to get this in as an optional feature, and then hopefully we can iterate on it. The hope is both to reduce the volume of metrics in the system (for large systems, the HTTP metrics can take up a significant portion of the entire metrics throughput) and, as well, to hopefully make the metrics a little bit more useful to consumers.
F
So there isn't really a list per se of open questions. One of the easy open questions is: how do you bucket latencies?
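On the bucketing question, one common answer is a histogram with exponentially spaced bucket bounds; here is a minimal Go sketch of that idea, with bucket boundaries chosen purely for illustration rather than anything the proposal has settled on.

```go
package main

import "fmt"

// Exponentially spaced upper bounds (in milliseconds). Each request latency
// increments the count of the first bucket whose bound it does not exceed.
var bucketBoundsMs = []float64{1, 2.5, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000}

type histogram struct {
	counts   []uint64 // one counter per bound, plus one overflow bucket
	sumMs    float64
	observed uint64
}

func newHistogram() *histogram {
	return &histogram{counts: make([]uint64, len(bucketBoundsMs)+1)}
}

// Observe records one request latency in the appropriate bucket.
func (h *histogram) Observe(latencyMs float64) {
	i := 0
	for i < len(bucketBoundsMs) && latencyMs > bucketBoundsMs[i] {
		i++
	}
	h.counts[i]++
	h.sumMs += latencyMs
	h.observed++
}

func main() {
	h := newHistogram()
	for _, ms := range []float64{0.8, 3, 42, 180, 1200} {
		h.Observe(ms)
	}
	fmt.Println(h.counts, h.sumMs, h.observed)
}
```

The trade-off behind the open question is visible in the sketch: fixed bounds keep the emission volume constant per route or app regardless of traffic, but whatever bounds are chosen determine how much latency resolution consumers get.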
F
The metrics do not have a lot of metadata associated with them, but perhaps that is where it could, or should, be added. There's also the current work, which isn't merged in yet: the option to turn it off and on isn't that great, and there's no TLS on the endpoint, so all of those things will have to be improved.
B
In the chat: this is our proposals GitHub project board, and the proposals that we have been going over today are all listed here. Here's the one that Jeff mentioned, here's one that Maria mentioned, here's one that Ben talked about, and here's the one we're going to talk about next: that's NATS v2.
B
I wrote an update here, yes, the other day, so I thought I'd review why we want to update, just a little quick friendly reminder. You know, besides the "hey, it's nice to stay on updated dependencies," there's also been a history of users experiencing issues with NATS. I think... let me make it a little bigger. There we go.
B
Something will happen that causes all the NATS clients to need to reconnect, and that seems to cause a CPU spike; then NATS stops accepting the new clients, and the new clients keep pounding it, and that causes the CPU to spike higher. And then none of these clients can talk to NATS, and then, of course, you get the problems of what happens in your foundation when things aren't going through NATS.
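For context on the client side of that reconnect storm, the nats.go client already exposes options for spacing reconnect attempts out; this is a minimal sketch of those options with a placeholder server URL and a simplified router.register payload. The actual fix being discussed here is the server-side upgrade itself.

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Spacing out reconnect attempts keeps a reconnecting fleet of clients
	// from hammering a server that is already struggling. URL is a placeholder.
	nc, err := nats.Connect("nats://nats.service.cf.internal:4222",
		nats.MaxReconnects(-1),            // keep retrying indefinitely
		nats.ReconnectWait(2*time.Second), // wait between attempts
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			log.Printf("disconnected from NATS: %v", err)
		}),
		nats.ReconnectHandler(func(nc *nats.Conn) {
			log.Printf("reconnected to %s", nc.ConnectedUrl())
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Simplified stand-in for the kind of route registration message
	// that flows over NATS in a Cloud Foundry foundation.
	if err := nc.Publish("router.register", []byte(`{"host":"10.0.0.1","port":8080}`)); err != nil {
		log.Fatal(err)
	}
}
```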
B
So we want to update, because we believe that updating will fix this issue that people have been hitting. I wanted to point out that we're not 100% sure: the engineer who did the deepest dive into reproduction and looking into this just wanted to call out that our reproduction was from a TCP write-timeout point of view, and the customers, or our users, seem to be hitting it from a TCP read-timeout point of view; but they were both failing because of server mutex lock issues, and so that's why we are fairly confident that NATS 2 will fix it. We hope.
B
The biggest problem is that NATS 1 can't talk to NATS 2, right, and it's a cluster and they all need to talk to each other. So during a rolling deploy, if there are more than two NATS VMs, you'll get a split brain, and, depending on what your client is connected to, it will only get a partial amount of information. If it's a very short-lived split brain, I think this would probably be okay.
B
You might have some Gorouters that wouldn't get new routes or wouldn't get deleted routes, but as long as, let's say the split brain was 30 seconds, as long as you weren't pushing a bunch of apps at that moment, it should just be 30 seconds of maybe a little bit of wonkiness, and then it would all figure itself out. So I think the biggest issues are that the Gorouters wouldn't get new routes or delete old routes.
B
Of
course
you
get
the
above
issues,
but
also
there
are
some
things
that
are
still
pruned
on
ttl's
on
time
to
lives,
and
so
after
120
seconds,
those
things
would
start
being
pruned.
B
So we've done a lot of exploration work. We've not totally decided on a path forward and we've not started execution on this, but it's definitely on the horizon to pick up in some weeks soon. I would like to say that by the time I see you next we'll be working on it, but I feel like, you know, with software it's hard to promise this; I really do mean soon, though.
A
D
B
Just remember that you can always look at this project board, and if you want to go deeper into any of these proposals you can comment on the appropriate card here. Thank you so much for joining today, everyone. If you have something that you think maybe we should talk about at the next working group forum, or maybe there is something else you want to cover, please let me know; I'm happy to do what the group wants. It's always great seeing you all, so thanks, everybody. Bye, everybody, bye.