From YouTube: CNCF CNF WG Meeting - 2023-09-11
B: All right, not sure who's gonna join, but let's get started.
B: Open Source Summit is next week, with the ONE Summit regional day. Maybe some stuff coming out there related to the working group and other things that we're doing over here.
B: Is there any type of online or virtual option for that? No, not that I am aware of.
B: All right, anything else interesting happening soon, in September or early October?
B: All right, let's see. So, the Telco Day schedule I think is coming out today.
B: It's a half-day event in the afternoon, a Day Zero event, and we should see the schedule announcement sometime today.
B: I'll have to come back on that one as well. I don't know what the latest is.
B: Let's see what Tom had to say. Well, actually, I guess we could just give a quick overview. We've been working towards trying to get some more best practices in. This is the first one in a while that's been fully proposed: we've had a bunch of ideas, but this one officially has a pull request in, and we're hoping to keep the momentum and get some more in. So everybody, be thinking about best practices. If there's one you're motivated to contribute to, at least write up a summary or anything, then please help put it forward.
B: This is the first in a while: single concern per container, or a single process type. So a CNF, a CNF's containers, or I'll say a CNF itself, may have multiple internal services providing the functionality, and those services may be broken into different processes that are not the same process type. For example, an Apache web server and a database would be two different process types, and they have different concerns: one is about data storage, the other is about serving HTTP requests.
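The web server and database split described above can be sketched as two single-concern Kubernetes workloads. This is a hypothetical illustration (names and images are placeholders), not something the proposal prescribes:

```yaml
# Sketch: one concern per container, each concern as its own workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # concern: serving HTTP requests
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db               # concern: data storage
spec:
  replicas: 1
  selector:
    matchLabels: { app: db }
  template:
    metadata:
      labels: { app: db }
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
```

Each concern can then be upgraded and scaled independently, and the two communicate over a Service, a well-defined interface, rather than sharing a container.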
B: Right, I'll just go through it now. Nikolai, you looked at it, all right? So the summary is pretty much what I just said.
B: We're saying that you should split it out, and we've given some references. There's a little bit that's related to microservice practices, and that could be things like size, and there are a lot of other things that tie in on the motivation and goals, the benefits that we're trying to achieve.
B: A quick overview, though: proven scalability, trying to leverage the platform's orchestration system rather than internal container orchestration. Upgrades are going to be based on the individual containers. Otherwise, from the start you're building in upgrade process dependencies and other things, versus having any type of internal process: maybe that HTTP server is upgraded and there's some type of problem or dependency between it and the database.
B: It's tightly coupled because they're in the same container, rather than loosely coupled in different containers with strong APIs between them to talk to each other. And that kind of ties in with this as well: managing the service concerns as individual units, since you're splitting them into containers. This is just the high level. And this little blurb here is a best practice that Docker talks about, splitting them. This is the single concern principle, applied to containers with Docker.
B: They don't specifically talk about microservices, but that'd be another area. All right, motivation. In this section we're trying to look at the different areas that would be important to both the end users, so CSPs and whoever else is running these applications, and the developers, integrators, that sort of thing: lifecycle management. You could look at the motivation as the problems and challenges that we're trying to solve.
B: So the first thing is: if you have multiple process types in a single container, then you need to somehow manage how those are working, the orchestration of the processes, keeping them alive, and that sort of thing.
B: If you're looking at making more efficient use of your resources, then you can think about allocation per service or concern. This also ties into response time, which I'm going to jump into. You want a faster response time for scaling up and bringing it back down. So if you have a peak, you want to be very responsive, but you also want to be resource efficient.
B: If you have them all together, then you're going to be scaling based on one large container, versus scaling maybe individual sets of containers. So if you only have one service that's getting hit hard, like maybe the web server, but your database is fine, then you only need to scale up your web server and not your database. And upgrades: if you're doing upgrades, this is talking about problems with the dependencies between them. Security would be semi-related, but you could find it even without upgrades, so any type of vulnerabilities.
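The point about scaling only the hard-hit service can be sketched with a HorizontalPodAutoscaler that targets just the web tier. The Deployment name and thresholds here are hypothetical:

```yaml
# Sketch: only the HTTP-serving concern scales; the database is untouched.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical web-tier Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

With a multi-concern container there would be no way to express this: the autoscaler could only grow the whole bundle, database included.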
B: Then observability, whether that's debugging or other stuff for developers; they'd like visibility. Maybe you have two different teams of developers working on the different services. Like, maybe you have a custom storage engine, and then you have some type of API engine for requests and doing other types of things. So this could be two different teams.
B: If you have them together, then trying to figure out where things are, what's going on, which services are having problems or whatever, you're potentially going to have more trouble than if they're split. Some of this digs a little bit more into the development cycle on the tightly coupled side, and then test coverage maybe even more so: if they're tightly coupled, how are you doing test coverage, versus if they're self-contained in the container? So then we have the goals, which relate back to those.
B: So, the orchestration: using the Kubernetes orchestration engine. And talking about microservice architectural practices: if you're already following these, so if you're an end user looking at the ops team or doing integration and you're already trained, and maybe you're trying to move to microservice patterns where you want to take advantage of those, then this practice would align with that.
B: The scaling that I was talking about, so just reasoning about it. This could be maybe the people creating these applications: they could give feedback, or the definitions that are provided could talk about the best ways to scale the different pieces. You can reason about those as a developer: okay, if we get this type of load on this service, we need to scale like this; these other services go like that.
B: Similarly for the end users, the operators, you know, whoever's maintaining, helping to run, and looking at the services: they can reason about it because they're split into different services, and that makes it easier. The resource utilization: if you're only scaling the HTTP server and not the database, you could scale that up rather than both, and maybe the database or storage or whatever is allocated to special types of nodes, while the HTTP server, or whatever service this is...
B: If it's a single container, you can try to reduce the risk, because you know it's only going to upgrade that one part that's been well tested. We'll go through every one of these. Security: I think I've kind of talked about it. I don't think we had this specifically up there, but finer control of permissions, because you can set them per service.
B: So maybe the database needs some type of storage permissions, but the web server wouldn't, so you would have finer control over the web server service, and maybe you tighten it on the database in some other area. On observability: thinking about log messages and other things, whether you're debugging when it's already in production and trying to figure out some area, or development-wise.
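The per-service permission control mentioned here maps to Kubernetes per-container securityContext settings. A sketch with placeholder images; the exact fields a real CNF needs would differ:

```yaml
# Sketch: permissions tuned per single-concern container.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: httpd
    image: httpd:2.4
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true     # web tier needs no writable storage
  - name: mysql
    image: mysql:8.0
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false  # storage tier gets a data volume instead
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
```

If both process types shared one container, they would also have to share the broadest securityContext either of them requires.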
B: If you're looking at output from a single container as a single service, then trying to figure something out is going to be easier than if it's combined; if it's split out, it's going to be easier. Also if you are looking to monitor the activity, maybe for optimizing scaling or something else in the communication between services. Maybe you're having an unexpected spike of requests to your storage backend, so you're thinking, okay, we need to scale that. That could also tie in with debugging.
B: Maybe it's not in the log output, but there's something going on. So if you're exposing that inter-process communication, instead of it being within the container, where it's harder to view, if it's between well-documented, defined APIs between the services, that's going to be easier to monitor. We have a lot of similar stuff here in the software development cycle.
B: It's probably a good idea to have some practice around that, but we're not doing it here. Any type of supervision and management of the processes within a container, which may or may not be needed if you're using external orchestration: that's out of scope. Any type of issues with homegrown management systems: out of scope. Implementation details, specifically saying how to split: we're not trying to say you should definitely split your storage and your web server or any other thing; we're not trying to define what should be split.
B: All right, so this is the shorter version of the proposal, versus the summary, which has a little bit of all of it. A CNF with multiple concerns should be split into services, or process types, for each of those concerns, into separate containers. Service dependencies should be handled between containers through well-defined interfaces. It's high level; we're not saying how that should be done, but that's kind of related to this. Pod specs for the CNF should provide scaling and monitoring information for each of the services.
B: In different containers. So these last two are the high-level direction of where we want to go once you split. These are going to help with the efficiencies on resource utilization, helping with monitoring, helping with any type of going from loosely coupled to being able to chain the different services and reuse them as different parts, whether that's internal or connecting to other applications.
B: To Kubernetes, maybe escalated privileges or non-escalated. We think this is a good practice in general for any type. Some user stories.
B: This one is based on an Intel document, which there's a link to at the bottom: trying to have 5G applications that are configurable, where you can mix and match them, with a high degree of programmability. Splitting concerns is going to help support that flexibility and programmability, including hardware requirements. So if you have a specific service within the application that has hardware requirements, then this will help, because you'll know that that single container is going to need some type of hardware, plus the efficient scaling and other things.
B: So that's what that ties in. It's also supporting automation goals, because all of the pieces should have well-defined interfaces and coarse-grained dependencies, meaning the dependencies are going to be contained within the container, and external to the container you're going to limit the dependencies, so it'll be easier to put those together, and then the testing as well. So all of those are there. So this is one of the use cases. Here's another use case that we're putting forward, as a diagram.
B: Looking at a service-based architecture: the SMF, that's this right here, and it has a lot of different interfaces. So, for your SMF, this would only apply if it's like this: if it's split, or implemented to have multiple processes that service these different communications, UPF, AMF, PCF, UDM, all these different things.
B: If you haven't split it into different processes for that type of communication, it's recommended that you would split it up, because you may have, for example, servicing upgrades. Maybe your PCF is getting upgraded, but the containers that are running these services wouldn't be interrupted by any type of upgrade on this side, because you have these running in two containers, or the dependencies are going to be limited to the interfaces for the service that communicates with the PCF. Or another thing would be: maybe your communication between the UPF and AMF to the SMF is much more variable.
B: They scale up and down based on the peak end-user usage during the day, and so maybe those need to be scaled up, but the communication to the UDM and PCF doesn't. That would be another one. So this is just to give context on how these could look and why we're recommending this. There could be a lot of other use cases; the simplest one, you know, up there at the top for the application, would be the web server and database.
B: Some notes. We're not saying that a container can't have multiple processes. Apache can start multiple worker processes; it's a web server, and it can have multiple worker processes that it forks off to handle requests. It can also have multiple threads, so those are both fine. A Java runtime running a Java application that provides a service might have many, many threads; that's fine. Definitions: there, I think, we referred to monolithic applications.
B: That's what we mean when we say a monolithic CNF. And multi-concerned containers: a container having more than a single process type, providing services for different concerns. All right, and then we have a bunch of references to many different places, including some vendor stuff, like this Ericsson information and the Intel paper that I was referring to a minute ago. Testing: we want to validate whether there is more than one process type, and we actually already have a test over in the test suite. There we go.
C: I wanted to add something and I'll ask a question, if my microphone is working. Cool, yeah. Oh sorry, okay. So one of the things that I might have missed here, I don't know if it was there: among the benefits, typically also mentioned in such documents, is being polyglot. Like the ability to have, you know, for example, the front end written in, I don't know, JS or whatever is suitable for a front-end application.
C: The other thing that I wanted to ask, and I'm not sure if this is the place to recommend this: do we have at least an internal understanding within the group, when we recommend this, do we have any specific recommendations about, I would say...
C: If I split my functionality into two separate containers, can I put a restriction to run them on the same worker node, or do we recommend that these functions should be completely distributable across multiple worker nodes, multiple data centers even? I don't know, you know, how far clusters these days can span.
A: Hey Nikolai, I guess it's a very valid question. I guess in this particular principle we didn't talk about both; we were centering the discussion on how to split things into containers. I guess, in order to achieve what you're saying, yeah, maybe, well, you know, you can use the pod definition to ensure that all the containers inside of that pod are going to be in the same location.
A: It could be a worker node or whatever. Or if, for some reason, you need to keep it more separated, yeah, you can tune the scheduler policy to distribute them, to have better high availability. But yes, I guess for this particular aspect we just centered on how to make the decision to separate the process types inside of the container.
B: I think it would be good to add it straight in as a comment, so we have a comment history; I'd like it there first. And then the place that I'm thinking of immediately, where we should at least say something about it, and you're welcome to add something, would be the notes section. Okay. And I don't know that we want to say out of scope, because I actually think we should consider that in scope.
B: One thing to think about, and this could be for you as well, Nikolai, is: do you think that we should stop the pull request? Is it something that you feel strongly about, that we should address directly as part of the proposal, or is it something that we could put comments into the pull request and then maybe do an update later? I think notes.
B: I could see it as: add it in right now. Probably how I'd do it would be a suggested edit, where you go down and, where is it, past... I think, oops, yep, sorry, I don't know what just happened. But if you do a suggested edit, or you go in here and review, if you think you have an idea for that, then I think at least in the notes section it would be good. ("Yeah, I will. Okay.") I would hesitate to put it in the main proposal.
A
Oh
no,
no,
the
only
thing
that
I
was
going
to
say
like
if,
if
for
some
reason,
the
document
is
not
reflecting
that
those
things
I
mean
it's
valid,
to
just
make
it
more
implicit.
So
maybe
we
were
just
assuming
things
and
and
it's
better
to
that,
that
the
response
or
the
explanation
has
to
be
in
the
document.
B: All right, let's look at Tom's comments, unless someone has anything else. Elderco, Lucina, Oliver, if y'all have anything, speak up. Otherwise I'll jump into Tom's comments. ("I'm good.") All right.
B: On to the suggested edits. I think I'll go over here to my view here.
B: All right, okay. So he's talking about, potentially, the motivation being challenges versus what we're claiming are going to be the benefits. And really, it's not claiming the benefits; it could be a goal for benefits we hope to achieve, not that you're going to achieve them. But we're talking motivation here. So right now we have "resource utilization is less efficient in multi-concerned containers, which require allocation for all components", so all services, all processes, versus the individual microservices, and he's saying maybe soften it: "it's likely to be less efficient."
A: Oh no, I wasn't saying anything, sorry. No, I was saying, but it's okay: also for me, we don't have any proof or any metrics to back a hard statement like that in this case. So I guess it's valid to just soften it in the same way.
B: Let's see the next one; this is still in motivation. Should we add a "may" here? It's not a given. All right: "CNFs with multi-concerned containers have a larger surface area" becomes "CNFs with multi-concerned containers may have a larger surface area."
B: I'm fine with this. I think this one is probably not as big a deal as the last one.
B: I could probably argue that just immediately: because they have multiple processes, they actually do, in fact. I'm kind of just thinking about this. If you have a web server and a database server, they may not have any security problems, but their attack surface area is larger, because people can try to find vulnerabilities in two different process types. I think they do have a larger surface area for attacks and bugs. I disagree with this one, with changing it.
B: Right, as soon as you have, well, we're talking about multi-concerns. If you have one process, then it's only going to have one surface area, but as soon as you have two process types, so instead of one process providing a service in a container you now have two processes providing two different services within a container, you're at least doubling it at that point. I don't see any other way around this.
B: Okay, so this one I agree with: it may have no effect. You may have another process where, even though that other process has a vulnerability, there's no effect. You know, it would get very nitty-gritty on this, because I could say: maybe your web server has a bug and someone gained access to it, but your database is so secure that they don't access the data. But if you have access to the web server, you could stop all traffic to the database.
B: So that's affecting storage requests. I don't know, this one's getting nitty-gritty. I'm kind of good either way. I might word it to use the "may", though: "security vulnerabilities in one process type may affect others", versus even "likely". I'd just make it lighter: may affect, may not affect.
A
Well,
the
thing
here
is
like
I
mean
what
the
arcany's
not
necessarily
like.
If
you
have
one
no
I,
think
one
process,
it
is
monetary
that
will
affect
the
other
processes
like
this
is
the
speed
that
that
the
mere
defense
like
like
open
the
proceed
to
do
not
affect
other
processes
or.
B: This is true, but he's not thinking about the actual communication between the services. But that's fine; he's not expanding on that. He's just saying it's only going to be the init process signals, which is true. I'm okay with this.
B: The container runtime, but that's known to people on Kubernetes, and I think the end users building these systems already understand the idea of the runtime.
A
The
last
the
last
word
like
in
parenthesis,
supervisor
I,
don't
know
if
we
have
to
I,
don't
know
context
or
find
it
or
because
the
rest
is
fine.
Like
I
mean
a
very
long
time.
It's
fine
big
process.
I.
Guess
you
well
understood.
B: I agree with you. I'll just put it like this, Victor.
B: All right, I'm just gonna put that and not accept it for now. All right, what's next? All the way down to, oh, user stories.
B: All right, so this is just saying, if you're going to run multiple processes, in the notes (and this is like a sub-recommendation, not the main proposal; it's under notes) we recommend that instead of writing your own supervisor, you use one that's already out there, that's well developed, tested, etc., like supervisord. Okay, so what did he say? "Perhaps add some additional context that was provided from the discussion." So this is in the working group discussion section.
B: "As the container runtime monitors PID 1 and uses its signals to report events, knowing when a container has stopped." So this is talking about what actually happens: how do you monitor it, what are we trying to do with the application. This gives an idea of what a supervisor is, if someone doesn't know, without having to go off elsewhere. That's fine. He added a little bit more, but we're not trying to add the entire discussion into this best practice. We give a link; if you want to understand it, go read it.
B: I'll add that as suggested, and we can accept it probably later this week; we'll do it async. That's a good one; this would be going under the software development lifecycle. We'll add that in; that's great. So, for those that didn't read it: you can have multiple languages, libraries, dependencies, all of that stuff with your service, which can be nice, especially with large services, or when multiple teams are working with maybe totally different stacks.
B: So we will try to go through those this week, and then maybe accept it next week. If you have ideas for another practice, especially if you're having problems with work that you're doing, or you think something's needed, and you'd be motivated to help write up stuff, add it to the Slack working group ideas, or drop an issue, or go take a look at the issues.
B: We have a bunch of best practice ideas, and you can thumbs-up or comment on them. I'd like to get started on the next one soon, within the next week; I'd like to maybe be able to get started and start having some sessions like we did. We do have some drafts on some that are issues, but you're welcome to add any others. One area that we've been thinking about, especially Victor and I, would be looking at Nephio and the best practices there. It leverages kpt, so there are a lot of configuration management items that could be interesting.
B: Nephio is also doing a lot of stuff with GitOps patterns, so that would tie into other projects like Flux and Argo CD. So deployments, and the automation side of things, could be some best practice areas. Just be thinking about it, and hopefully we'll get going on that one. We'll have a best practice, maybe the top three, picked next week, and get the pull request merged. Thanks, everybody, have a great day and a great week. Please review, and I look forward to next time.