From YouTube: CNCF Telecom User Group 2021-06-07
A: Good morning. Did anyone post the meeting notes into the chat yet?
B: Sure. Just so people know, we'll probably wait till five after before we get started, so people have time to join.
B: Okay, it's five after, so we can probably get started. Welcome, everybody, to the Telecom User Group meeting. We meet on the first Monday of every month, and if you want to add your name and any agenda items to the meeting minutes, you can always do that in this Google Doc.
B: Okay, great. Thanks, everyone, for showing up. Just a reminder about upcoming events: the CNF Working Group meets every week on Mondays at 1600 UTC. KubeCon Europe just passed, and there are a bunch of great recordings about the cloud-native telecommunications industry; the videos are now up on YouTube. KubeCon North America is coming up this fall, and co-located alongside it will be the Open Networking and Edge Summit and Kubernetes on Edge Day. There's more information in the links here.
B: So with that, today we have Nitish to talk about Orkestra, which is a cloud-native release orchestration and lifecycle management platform for fine-grained orchestration of groups of interdependent applications. Nitish, would you like to take over?
C: Perfect. Hi folks, my name is Nitish. I'm a senior software engineer at Microsoft, in a newly formed team called Azure for Operators. I'm going to talk about an open-source project that I've been working on for the past year, along with the team, called Azure Orkestra. Orkestra is an orchestration platform.
C: It's built for system-level releases of complex, mission-critical applications, which can also be seen as tightly coupled Helm applications, because Helm charts are the delivery mechanism today when we deliver our apps to our customers.
C: Previously I was a technology architect in the office of the CTO at Affirmed Networks; Affirmed Networks was acquired by Microsoft early last year. I'm also a Kubernetes enthusiast. I've been working with Kubernetes for the past three to four years now, and prior to joining the office of the CTO, I was working with service meshes, with Istio, to support our 5G platform.
C: I'm going to cover a couple of topics today. I'll keep it short, but I do wish to show you a demo of Orkestra, with a slightly less complex app compared to the network functions that we ship to our customers. As an overview, Orkestra is an open-source project. It didn't start as something that was open source: it was tailored to Affirmed Networks (I'm going to say Affirmed right now, because that's where we started our journey; now it's Microsoft).
C: So it was tailored to our workloads and how we ship the network functions that we build, which are 5G core network functions that are sold and shipped to service providers. But over time we realized there were components in there that would be of some benefit to the rest of the Kubernetes community, so we started abstracting out the pieces that were not specific to the Affirmed workloads, and we built Orkestra out of those abstractions.
C: The primary goal of the platform is release orchestration of DAG-like groups of highly dependent applications. For these groups of applications, release orchestration covers the rollouts and rollbacks, and then you have lifecycle management, which is about watching the state of the system (our network functions and other components) and auto-remediating on failures.
C: That means going back to the last successful deployment, and everything should be zero-touch. Like I said, it's a group of Helm applications, so think of it as a bunch of Helm artifacts, Helm charts, that are your applications, and a lot of these Helm charts can also have subcharts, which could be the microservices that power your network function. The operator is cluster-scoped; it's not a multi-cluster solution, though there is some work in that direction.
C: There is some work in progress to build an abstraction on top of Orkestra to make it multi-homed across multiple clusters, but this one focuses on a single cluster and deploying components on one Kubernetes cluster. As for the architecture, the whole solution is built using some popular CNCF projects.
C: We have Argo Workflows, which lets us do dependency-based, DAG-based workflows; you have ChartMuseum and Helm, which serve as your Helm registry and your vehicle of delivery; and Helm Controller to automate the whole set of Helm operations.
C: Helm Controller has some nifty features where it can do remediation for you. It's a declarative spec; you no longer have to deal with imperative commands using the Helm CLI to deploy your applications. Orkestra itself is built using the Kubebuilder project. It's written in Go, and it leverages the official client-go and controller-runtime libraries, so no surprises there.
C: It uses what Kubebuilder provides. The input to Orkestra is a custom resource called ApplicationGroup, and it's a collection of applications with the dependencies among them spelled out.
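[Editor's note] For a sense of the shape, a minimal ApplicationGroup could look roughly like the following. This is a sketch based on the description above, not the authoritative schema; the API group, version, and field names are assumptions and may differ from the released CRD:

```yaml
# Illustrative ApplicationGroup: two Helm applications where "my-nf"
# waits for "infra" to deploy successfully before it is rolled out.
apiVersion: orkestra.azure.microsoft.com/v1alpha1
kind: ApplicationGroup
metadata:
  name: example-group
spec:
  applications:
    - name: infra                         # no dependencies: first DAG layer
      spec:
        chart:
          url: https://charts.example.com # placeholder chart registry
          name: infra
          version: 1.0.0
    - name: my-nf                         # deployed only after "infra" succeeds
      dependencies:
        - infra
      spec:
        chart:
          url: https://charts.example.com
          name: my-nf
          version: 2.0.0
```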
C: The primary use case is in-service upgrades of mission-critical systems, and especially, since we're in the telco group, it's around the 5G core network functions. We build a bunch of network functions that we do not manage; these network functions are delivered into air-gapped environments, where you no longer have any visibility. So you need something that does zero-touch deployments, and it needs to be reliable and completely automated.
C: So that's the main use case when we build our applications: you have your network functions, but you also have a whole bunch of supporting components that, as vendors, we have to provide, which may or may not be something that the customer is going to manage for us. So you have to ship an entire group of applications, supporting components, and infra components along with the NFs.
C: I was listening to a session from the previous meeting, and there was a line in there that caught my attention: network functions are not just Helm applications. They're more than that. It's not like the enterprise world, where you can have a single chart for a single application; it's a little more complex. So another use case for Orkestra was to provide rollout strategies. You have the standard Kubernetes rollouts, but we want to enhance that, especially in mission-critical systems like 5G.
C: You want to provide canary and blue-green deployments, so Orkestra can leverage some service-mesh features behind the scenes to do canary and blue-green deployments, and the plug-in ecosystem allows users to build custom rollout strategies as well. So you can build your entire pipeline for how you want to roll your application out, including any telemetry data you want to look at that isn't supported by the service meshes, or rather by the progressive-delivery frameworks.
C: Another feature is auto-remediation. As soon as a failure is detected, Orkestra is capable of remediating the errors and rolling back to the last successful spec. Remediation lets you fail fast, so you can catch errors quickly, and rather than causing a lot of disruption, you can just roll back to your last working set of applications.
C: There are two planned components that we're working on; rather, they've actually started, we kicked them off last week. First, Orkestra will support quality gates. We would use quality gates for promotions, manual and automated; that gives you control over how you release your applications across the customer's network, from the staging labs, to the canary labs, all the way to the production labs. Second, continuous testing as an integrated part of Orkestra.
C: So rather than having your own test infrastructure running outside of your Kubernetes orchestrator (rather, the release orchestrator), you would have continuous testing built into these rollouts. Along with automated canary analysis, Orkestra would also do its own analysis of any custom metrics it needs to look at.
C: The reason network functions are so complex is that they rely on a lot of infra and PaaS components that are specific to the vendor's network function. Different vendors are going to deploy their network functions on a single cluster or across a network of the customer's (the service provider's) clusters, and everyone has to bring their own supporting components. Because of that, we start having tight, really complex dependencies between these applications and all the supporting components.
C: Here I just show an example of how we deploy our workloads. We have strong dependencies on some RBAC policies that need to be configured before anything else can be deployed. These aren't pods or microservices that converge; these are base resources that you need to set up before anything else can be started. Similarly, you have OPA, which you need for mutation and validation, so those webhooks need to be up and running before anything else can start.
C: MetalLB is another case, when you're doing this on bare-metal servers, bare-metal clusters. And then we do leverage the Istio service mesh, which means the service mesh needs to be up before you can start any of the network functions, because of the whole sidecar-injection approach and the configuration of Envoy. And there are other components for observability and security that we need set up before our NFs can be spun up.
C: Some of the features of Orkestra: it's built for Kubernetes, using the operator pattern to deploy controllers to Kubernetes. It's completely declarative because of the custom-resource approach: you provide the state you want to be in, and the controller reconciles and gets you to that state. And it's GitOps-compatible; it can plug into any GitOps framework, things like Argo CD and Flux CD, so it's pretty much agnostic to them all.
C: The way it works is by generating DAG-based workflows. This is where we leverage Argo Workflows. Our applications, the Helm charts themselves, can have dependencies among themselves.
C: You could think of the first set of layers as the infra components, and as you go through the graph, the final thing is your network functions, or a set of dependent network functions being deployed. So it renders a DAG-based workflow for these applications. As an optional feature (for us it's effectively mandatory), it also does DAG-based workflows for the subcharts contained within the application chart. Our NF charts are comprised of a lot of microservices within an application, so we deploy those as Helm subcharts, and they need to be coordinated as well.
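[Editor's note] The rendered workflow is an ordinary Argo Workflows DAG. A hand-written workflow of the same shape (the task names and the executor image are placeholders; Orkestra generates the real one) would look roughly like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: appgroup-rollout-
spec:
  entrypoint: rollout
  templates:
    - name: rollout
      dag:
        tasks:
          - name: infra                  # first layer: infra components
            template: deploy-helmrelease
          - name: network-function       # runs only after "infra" succeeds
            dependencies: [infra]
            template: deploy-helmrelease
    # Executor step: in Orkestra this container submits a HelmRelease
    # and waits for it to reconcile; the image here is a placeholder.
    - name: deploy-helmrelease
      container:
        image: example/helmrelease-executor:latest
```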
C: I should take that back: they don't need to be coordinated, but when you do in-service upgrades, it's good to have them ordered following a DAG, where the most crucial elements get updated first, or last, however you want it, so that you can reduce the blast radius when things go wrong. You catch the issues early and then roll back, rather than having everything blow up at the same time.
C: The reason this is important in the service-provider world is that they're not going to do continuous deployment as it's done in the enterprise world. It's not "take updates as they come in and roll out applications."
C: They schedule the releases, which means it's not just one application going in: there's a whole bunch of Helm charts being upgraded, along with, say, the service mesh and the observability and security components. The application DAG helps with that. The DAG is always followed: you honor the dependency tree and deploy the infra components first, then your applications come in, and with the subcharts we again break it down so that you can catch any failures at the microservice level.
C: I made a mention of the GitOps frameworks; it plugs into any CD framework, and you can also drive it through kubectl. The plug-in ecosystem is "bring your own executor container image." The way it works today is that each of the DAG nodes is responsible for generating a custom resource.
C: That resource is picked up by Helm Controller. The custom resource is called a HelmRelease, and it's very similar to what you would express with a Helm CLI command.
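[Editor's note] For reference, a HelmRelease consumed by the Flux helm-controller looks roughly like this; chart and repository names here are illustrative, and the exact API version in use may differ:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  interval: 1m
  chart:
    spec:
      chart: bookinfo
      version: "0.1.0"
      sourceRef:
        kind: HelmRepository    # e.g. a ChartMuseum instance
        name: chartmuseum
        namespace: orkestra
  values:                       # overlays, like `helm install -f values.yaml`
    productpage:
      replicaCount: 1
```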
C
You
provide
the
values
you
provide,
any
any
flags
that
helm
needs
and
home
controller
would
reconcile
and
make
sure
it
does
all
the
helm,
actions
that
you
need
so
so
today
it
just
just
deploys
the
helm,
release,
object
and
waits
for
the
status
before
the
workflow
node
succeeds,
but
you
can
bring
your
own
workflow
container,
where
you
could
do
a
lot,
a
lot,
a
lot
of
other
things,
so
you
could
go
in
and
look
at
monitors
and
monitor
the
state
for
a
bit
before
deeming
the
node
of
success
and
moving
on
to
the
next
node
and
the
dag.
C
We
do
our
upgrades
following
the
forward
workflow,
but
the
the
reversal
is
not
a
blanket
roll
back.
Everything
won't
be
rolled
back
to
the
to
the
previous
versions,
all
together,
so
instead
it
reverses
the
workflow
workflow
again
honoring
the
the
dependencies
and
starts
starts
either
either.
If
this
is
a
delete,
then
it
starts
deleting
everything
in
the
reverse
order,
or
if
it's
a
rollback,
it
does
the
same
thing.
C
So
our
ecosystem
is
becoming
larger
every
day.
We're
we
just
introduced.
We
we
just
spoke
with
the
captain
team
and
we're
making.
We
started
the
integration
there
and
that's
captain
is
another
cncf
project
that
can
do
validation
and
quality
gates.
C
So
captain
would
be
an
integral
part
of
orchestra
over
the
next
few
months,
as
we
work
on
it
and
then
kept
in
itself.
Could
leverage
progressive
delivery
frameworks,
while
orchestra
outside
orchestra
can
do
this
without
captain
as
well.
So
we're
going
to
put
up
some
examples
of
using
argo
rollouts,
which
is
which
leverages
istio
to
do
some
progressive
delivery
and
automated
analysis
to
to
promote
your
applications.
C
So
so
this
is
our
roadmap,
like
I
said,
captain's
the
first
item
in
our
roadmap
and
then
we
have
argo
rollouts
as
an
example
as
well.
That
will
be
added,
so
you
can.
You
can
get
to
the
github
repo
following
these
links
and
we
do
have
official
docs
as
well,
especially
for
admins
and
contributors
who
are
interested
in
contributing.
C: All right. What we have here: I'm going to quickly show you our application group. This is the custom resource that Orkestra picks up, and you can see that it just has a list of applications.
C
So
so
this
is
your
application
group
with
two
applications
you
have
ambassador
and
book
info
is,
is
kind
of
based
on
the
sto
example,
where
the
book
info
app
has
a
whole
bunch
of
microservices
product,
page
reviews,
ratings
and
details.
So
these
are
treated
as
sub
charts
in
the
application
chart
and
in
this
example,
book
info
is
dependent
on
ambassador,
which
means
ambassador.
C
Please
spun
up
and
roll
it
out
before
book
info
can
be
started,
so
we
have
to
make
sure
that
everything
comes
up
and
only
then
booking
for
gets
kicked
off.
So
within
the
book
and
for
application
we
have
dependencies
among
the
sub
charts
as
well.
So
in
here
you
have
details
and
ratings
with
no
dependencies,
which
means
they
can.
C
You
know
first
and
then
you
have
reviews
which
depends
on
those
two
sub
charts
and
then
product
page.
That
depends
on
the
reviews.
So
kind
of
everything
is
driven
through
him,
so
you
have
the
standard,
helm
options.
You
can
specify
what
target
name
space
to
go
to
what
over
overlay
values
you
want
to
use.
In
this
case
we
have
the
replica
accounts
that
we
override
on
the
default,
charts
and
subcharts.
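[Editor's note] The demo resource described here can be sketched as follows. Field names and chart coordinates are illustrative assumptions; the actual example manifest lives in the Orkestra repository:

```yaml
apiVersion: orkestra.azure.microsoft.com/v1alpha1
kind: ApplicationGroup
metadata:
  name: bookinfo
spec:
  applications:
    - name: ambassador                    # rolled out first
      spec:
        chart:
          url: https://charts.example.com # placeholder registry
          name: ambassador
          version: 6.6.0
        release:
          targetNamespace: ambassador
    - name: bookinfo
      dependencies: [ambassador]          # waits for ambassador to succeed
      spec:
        chart:
          url: https://charts.example.com
          name: bookinfo
          version: 0.1.0
        release:
          targetNamespace: bookinfo
          values:
            productpage:
              replicaCount: 1             # overlay on the chart defaults
        subcharts:                        # DAG among the microservices
          - name: details
          - name: ratings
          - name: reviews
            dependencies: [details, ratings]
          - name: productpage
            dependencies: [reviews]
```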
C
So
so
what
we
have
here
is
our
orchestra
took
in
that
the
application
group
and
it
generates
a
workflow
out
of
this.
C
So
there's
these
dags
are
intermediate
nodes
they're,
not
the
executor
containers,
so
you
have
ambassador
spinning
out
now
on
the
right.
You
can
see
it's
starting
the
ambassador
pods
and
with
within
that
and
the
way
orchestra
did
the
generated
the
object
for
these
executed
containers.
Is
it
passed
in
a
helm?
C
Release
object
to
our
executor
as
a
base64,
encoded
string
and
when
you
decode
that
it's
it's
just
a
helm,
release
object
that
controller
picks
up,
and
this
executable
container
down
here,
if
you
look
at
the
logs
is,
is
just
waiting
for
the
helm
release
to
go
into
a
successful
state.
C: Since subcharts are dependencies, the dependencies go first, before the application chart itself. The application chart may or may not have any resources being deployed, but if it does (deployments, config maps), all of those will be created after the subcharts are deployed. So this is different compared with Helm itself.
C: When you do a helm install, everything gets started together; Orkestra staggers everything out. So we have a successful workflow over here. I can go in and make a modification, and this would be similar to the spec changing over time and your CD system picking it up.
C: Both of them should see the changes, and what's going to happen is that this workflow is going to take off again. It's completely idempotent, which means the HelmReleases that haven't changed will not be affected at all; the executor container will see that the release is in a succeeded state, do nothing, and move on to the next part. So now we have the subcharts spinning up quickly; I just wanted to show you.
C
So
you'll
see
that
reviews
and
product
pages
to
do
two
applications,
two
sub
charts
that
we
touched.
The
values
that
we
touched
were
moved
on
to
revision.
Two
final
thing
I
wanted
to
show
is
the
rollback
actually
the
delete.
So
so
so
let
me,
let
me
show
the
remediation
as
well.
So
if
I
go
in
and
let's
say
set
it
at
a
version
that
doesn't
exist,
it
should
roll
back
to
the
last
successful
spec.
C: So that succeeded. Now I can go in and delete the application group, and what we should see when we look at the workflows is a reverse workflow being kicked off. The forward workflow, if it was in the middle of reconciling, would be suspended, and the reversal would be started. If you go back and look at the pods, they'll be terminating in the opposite order of how they were deployed: we have Bookinfo's productpage and reviews going in the reverse order and being deleted.
C
So
yeah.
That's
that's
what
I
I
have
for
the
demo
building.
Some
more
demo
has
to
show
features
like
progressibility
and
quality
that
should
be
out
in
in
a
short
while,
but
we
are
at
our
mvp
stage.
Things
are
a
lot
more
stable
than
they
used
to
be
so
feel
free
to
play
around
with
the
with
the
book
info
app
and
if
you
want
try
it
with
your
own
applications
as
well
and
we'd
love
to
get
some
feedback
and
also,
if
you
wish
to
contribute,
there's
a
lot
of
open,
open
issues.
C
All
right
any.
D
Questions
hi.
Thank
you,
nitish.
It's
a
really,
really
great
presentation.
I
have
a
lot
of
questions.
D
I'll
start,
maybe
with
one
you
know,
you
started
the
presentation
talking
about
the
challenges
of
cnfs
and
and
telco
workloads.
C: So you mean the orchestration of those CNFs?
D
Well,
you
talked
about
the
some
challenges
of
cns
having
to
do
with
integration
with
the
platform
with
the
host.
C
Yes,
still,
let
me
go
back
to
this
picture,
so
I
said
this.
This
is
based
on
my
experience
with
the
firm,
the
the
way
I
so
you
could.
C
You
could
think
of
our
the
the
data
plane
component,
like
the
upf
upf
is
something
in
terms
of
the
platform
if
you're
speaking
is,
would
need
would
need
dpdk
on
your
platform
as
well,
but
when
you,
when
you
get
into
the
kubernetes
world,
there's
a
whole
bunch
of
cni
frameworks,
you
have
other
operators
like
intel
building
features
to
to
help
with
some
hardware
acceleration
dpdk
again,
I'm
not
the
expert
on
that,
but
I've
seen
like
these
dependencies
that
are
done
to
kubernetes
operators,
so
this
kind
of
becomes
a
a
prerequisite
before
nfs
can
be
deployed.
C
You
can't
just
go
in
and
deploy
upf
and
expect
it
to
work.
So
in
that
way,
things
need
to
be
set
up,
so
so
orchestra
honors
that
dependency
tag.
So
one
when
you're
rolling
things
out.
It
would
follow
that
order
where
all
the
prerequisites
are
started.
So
in
this
case
we
show
from
the
application
from
the
platform
side,
you
have
the
service
mesh.
That
needs
to
be
up.
C
You
have
opa
to
do
our
mutations
that
we
need
in
terms
of
we
do
leverage
it
for
a
lot
of
security
features
and,
like
I
said
any
of
those
from
what
I
remember,
they
were
like
new
pneuma,
core
mappers
and
there's
some
whole
dynamic
set
of
controllers
that
intel
builds
around
this,
so
so
that
that's
what
I
mean
by
orchestra
handling
dependencies
is
it's
agnostic
to
what
you're
deploying,
but
as
long
as
you
have
it,
packaged
as
a
helm,
chart
it'll
follow
the
dag.
D
Somewhat
yeah,
I
guess
I
guess
my
follow-up
is
related
to
that.
So
I
mean
the
dag
is
a
feature
of
argo
workflows.
This
is
not
maybe
specific
to
orchestra
and,
of
course,
one
of
the
things
you
can
do
with
argo.
You
don't
have
to
call
helm
you
could
your
your
tasks
can
be
anything
you
want
them
to
be,
including
configuration
tasks.
D
Things
like
things
that
you
absolutely
cannot
do
with
home
things
like
installing
things
on
the
host
installing
cni
plug-ins,
for
example,
and
configuring
them.
I
see.
Argo
also
were
very
useful
or
possibly
useful
for
solving
issues
with
configuration
right.
So
it's
not
just
deploying
lots
of
kubernetes
resources
right
because
in
the
end,
what
we
have
here,
as
you
showed
with
you,
showed
that
we
have
all
these
kubernetes
resources
deployments,
pods,
etc,
that
that
are
being
deployed
for
us
in
order
so
two
issues
there
one.
D
I
think
we
would
all
like
our
applications
to
be
more
cloud
native
in
the
sense
that
they
shouldn't
depend
on
order.
That
is,
things
should
come
up
and
if
they
do
have
dependencies,
they
should
be
autonomous
in
the
sense
that
they
every
component
should
be
able
to
make
sure
that
if
the
dependencies
don't
exist,
then
it
won't
do
any
any
work
right.
D
Ideally,
you'd
want
to
be
able
to
deploy
everything
declaratively
without
a
workflow
and
and
have
everything
up,
but
we
we
of
course
the
reality,
isn't
always
that
that
case,
one
of
the
aspects
has
to
do
with
configuration
tasks.
Things
like
netconf
configuration
for
various
components,
things
that
could
happen
in
an
order
and
it's
useful
to
sometimes
break
them
into
building
blocks
that
you
can
indeed
put
in
a
cyclical
graph.
D
But
one
of
the
problems
with
helm-
and
I
think
helm
is
very
problematic.
To
be
honest-
is
that
in
the
end
you
know
it's,
it's
a
text
templating
tool
right.
It
creates
a
kubernetes
manifest
for
you
and
yaml
and
and
those
become
the
the
kubernetes
resources,
but
but
but
it,
but,
but
we
have
much
more
that
we
need
than
just
that.
I
think
I
mentioned
netconf
configuration
task,
but
there
are
all
kinds
of
other
things
assembling.
D
For
example,
clusters
of
components
maybe
putting
a
load
balancer
of
some
sort
for
a
particular
protocol
among
them.
There's
it
this
problem
space
I
devoted
a
lot
of
time
for
it
I
I'll
share
in
the
chat.
Some
people
know
this
sorry
for
tooting,
my
own,
my
own
horn
here,
but
I'm
working
on
an
orchestrator
called
torandot,
which
is
based
on
tosca
rather
than
the
kubernetes
manifest
and
lets
you
create
topologies
with
all
kinds
of
relationships.
Relationships
themselves
are
typed
in
tasks.
D
So
what
I
see
here
is
basically
there's
one
kind
of
relationship
which
is
a
dependency
and
that
dependency
defines
order
of
execution.
But
I
think
there
are
a
lot
of
other
kinds
of
relationships
that
we
we
want
to
create.
Some
of
them
are
networking
connections,
but
some
of
them
are,
you
can
call
them
logical
relationships
that
have
to
do.
For
example,
with,
as
I
said,
say:
look
you
want
to
describe
a
load
bouncer,
so
you
know
I.
I
love
this
visual
graph
presentation,
but
this
is
a
deployment
graph.
D: The question would be, more specifically, about how Orkestra handles tasks that are not Helm. I mean, it's integrated with Argo, but how well is it integrated with Argo? That is, if I wanted to define...
A
Anything
give
an
example
of
a
task
that
it
wouldn't
handle
by
default
through
helm.
So
then
we
can
examine.
D
That
I
I
thought
I
gave
one
so
a
netcon
configuration.
So
let's
say
some
of
these
components
are
are
running
a
netconf
agent
and
they
need
to
be
configured
as
part
of
this,
this
entire
product
right.
So
it's
not
just
every
individual
subchart,
but
those
components,
those
pods
those
services
might
need
to
be
configured
with
netconf
right
according
to
a
general
plan.
How
would
that
integrate.
C
Yes,
it's
a
good
question,
but
it's
kind
of
outside
the
scope
of
orchestra
you're
right.
We
only
address
the
whole
deployment
strategy,
so
it's
doing
orchestration
agnostic
of
what
kind
of
deployments
you're
you're
you're
trying
to
leverage
it
for
so
it
doesn't
know
whether
you're
a
telco
system
deploying
5g
applications
or
not
orchestra.
That's
why
I
said
it
started
it
started
pretty
it
was
it
was.
It
was
built
for
the
affirmed
workloads.
C
At
that
point
there
were
a
lot
of
tie
backs
to
configuration
day,
two
deployments,
but
over
time
we
we
saw
that
this
could
be
applied
to
any
kind
of
application.
So
so
one
it's
it's
a
generic
tool.
It's
not
it's!
It's
in
no
way
tied
to
5g,
but
it
was.
It
was
built
with
these
mission
critical
systems
that
take
multiple
releases
at
a
time,
rather
than
continuous
delivery.
That
enterprises
do
so.
C
It
was
built
with
in-service
upgrades
in
in
mind,
and
this
is
this
is
just
around
the
deployment
aspect
of
it.
You're
right
about
the
day
two
configuration
a
lot
of
people
have
their
own
proprietary
stuff.
We
have
our
own
propriety
stuff
on
how
we
do
day.
Two
configuration
which
interestingly
also
leverages
helm
but
yeah,
I'm
not
gonna
dive
in.
C
I
I
don't
even
know
too
much
about
it,
because
I'm
not
the
the
main
person
working
with
that,
but
there
are
cases
where
we
leverage,
helm
and
helm
operations
to
to
do
those
configuration.
C
But
with
that
said,
the
the
selling
point
of
orchestra,
in
my
mind,
is
the
in-service
upgrades
and
the
the
the
kind
of
progressive
delivery
that
it
brings.
So
it's
it's
defense
and
layers,
and
by
that,
what
I
mean
is
you
could
start
at
the
lowest
layer
and
again
we
are
tied
to
helm
today.
So
I'm
gonna
speak
from
that
aspect.
C
You
can
have
helm
tests
which
are
part
of
your
health
charts
that
the
developers
build.
You
can
bring
in
progressive
delivery
framework
like
argo,
rollouts
or
flagger,
in
which
case
you
have,
if
you're
using
canary
deployments,
you
have
automated
canary
analysis.
So
it
would
it
would
leverage
service
meshes
to
to
redirect
traffic
in
steps
and
do
the
validation
as
it
as
it
redirects
traffic
to
the
to
the
canary
parts
before
doing
a
promotion
or
a
rollback.
C
So
that's
that's
another
layer
and
then
the
third
layer
is
the
introduction
of
captain
which
one
lets
you
do
quality
gates.
I
think
the
the
other
feature
that
we
we
love
about
captain
is
that
it
can.
It
can
do
continuous
testing
for
you
and
in
the
5g
world.
You
could
imagine
that,
as
as
the
deployments
happening
for
every
application
going
in
captain
can
do
a
call
back
to
say
an
x4
server
or
whatever
your
testing
framework
is
for
your
call
flows.
It
could
go
kick
that
off.
C
Do
some
validation
and
only
then
promote
the
node
and
the
dag
sorry.
The
applications
are
right,
so
it'll
it'll
change
the
node
in
the
dag
to
a
green
before
it
goes
to
the
next
next
node
in
in
the
workflow
tag.
So
so
that's
that's
three
layers.
The
the
fourth
one
would
be
bring
your
bring
your
own
container
executor.
So
now
you
could
build
your
own.
C
You
could
have
your
own
script,
whatever
runtime
you
want
to
use
that
could
query
if
you're
using
azure
it
would
go
in
and
look
at
azure
monitor
or
any
other
kind
of
metrics
or
behaviors
that
you
want
to
test
for
across
your
system
or
application
group
just
to
make
sure
nothing
else
was
affected.
So
you
can
build
your
own
own
validation
as
well.
So
I
I
think
it's
it's
to
answer
your
question.
It's
not!
C
It's
not
tailored
to
5g,
specifically,
or
rather
the
configuration
aspect
of
5g
which
itself
on
its
own,
is
like
a
really
complex
task
and
orchestra
is
not
trying
to
address
that
at
all.
D
So
I
guess
that
it
really
depends
on
you
know
for
these
tasks
to
turn
green
right.
It
means
that
you
have
to
write
a
helm
chart
that
not
only
succeeds
in
deploying,
but
actually
has
some
tests
to
make
sure
to
make
sure
that
this
component
is
up
right.
This
subchart
is
up,
so
it
it's
up
to
you
to
make
sure
that
your
helm
chart
does
the
verifications
that
it
needs
for
the
task
to
turn
green
before
it
just
moves
on.
C
Yeah,
additionally,
not
just
checking
its
own
state
and
whether
it's
up
it
can
also
so
a
lot
of
a
lot
of
those
nfs
are
interdependent,
so
it
can
go
and
query
the
state
of
or
make
make
some
sample
calls
or
whatever
it
needs
to
do
between
those
those
nfs
to
make
sure
things
are
looking
good.
All
the
calls
are
succeeding.
So
if
you
do
smaller
tests
there
like
integration
tests
in
there,
and
then
you
have
the
system
level
tests
with
the
call
flows
happening
using
captain.
C
So
it
can,
it
can
look,
look
at
the
entire
topology
and
do
some
verifications
across
the
infra
across
other
applications
running
on
the
cluster.
C: All right, so let me just share the link with all of you.
C
It
would
be
great
to
see
people
come
in
and
give
some
feedback
and
we
love.
We
love
our
contributors
to
to
come
in
and
pick
up
some
tasks
as
well
just
paste
this
right
here.
A
Yeah
put
it
on
the
yeah
that
google
doc,
you
can
add
them
the
links
to
the
line
item
for
you.
C
That
all
right
yeah,
that's
only
good
thanks
for
thanks
for
letting
me
speak.
I
appreciate
it.
B
Yeah,
absolutely
thanks
for
coming
and
presenting
to
us
today.
That's
all
we
had
on
the
agenda
for
today.
So
in
the
t-shirts
you
can
add
the
links
to
the
doc
just
so
people
can
find
them
after
the
meeting
that'd
be
great.
Is
there.
D: I have a general question regarding the agenda and the KubeCon videos that are up. There are a lot of KubeCon videos; I wonder if anybody in this group can point us to some good ones having to do with topics related to telco.
B
Yeah,
so
I
think
taylor
put
a
couple
in
so
one
that
I
recommend
was
the
keynote
from
book
from
deutsche
telekom.
Talking
about
how
they're
using
cncf
technologies
to
build
out
their
5g
infrastructure,
taylor
also
gave
a
talk
about
the
cnf
working
group.
Did
anybody
else
see
any
talks
at
kubecon
that
they
liked
or
would
recommend
to
other
people.
D
I've
I've
watched
exactly
the
two
videos
recommended
so
and
nothing
more
okay.
I
know
there
is
also.
B
Some
at
the
kubernetes
on
edge
day
two,
but
I
I'll
add
a
link
to
that
afterwards,
because
I
wasn't
able
to
watch
them
because
there's
a
lot
of
things
going
on
on
the
day:
zero
events
but
I'll
add
a
link
to
those
to
those
videos.
After
the
call
too.
B
Cool
is
there
anything
else
anyone
wants
to
discuss.
B
Okay,
I
see
nation
just
drop
the
link
to
the
slides
in
the
chat
too.
B
So
with
that
thanks
everyone
for
coming,
if
you're
joining
the
cnf
working
group
meeting
we're
going
to
be
starting
in
about
eight
minutes
on
the
other,
zoom
call.
So
thanks
everyone
for
joining.