From YouTube: CDF SIG Interoperability Meeting 2020-03-05
A: So the agenda is full today. The first item is, as usual, the action items: we have some action items we took during the last meeting. That is followed by the roadmap discussion. Tracy just sent an email to the mailing list saying that she will be late, so we may leave this topic to the end of the meeting if sufficient time is left.
A: We can have a quick discussion on it then, rather than waiting for the next meeting. There is also a reminder about Google Summer of Code. Then we have the Keptn presentation, and colleagues from Orange are here as well to present their way of doing CI/CD, establishing pipelines from source all the way through CI/CD. If you have any topics, please add them to the list.
A: We might not have time to cover everything, but if not, we will take it as the first topic during the next meeting. So, on to the action items from last week; let's take a quick look at them.
A: So the first action item was on Tracy, and we have the topic in the agenda today; let's hope we have time to hear what she thinks about the roadmap. Then there is another action item on Jason Orosco from GitLab, to provide basic examples for GitHub Actions and CircleCI orbs.
A: I don't see him in the meeting, but we are trying to get someone from GitHub, sorry, from CircleCI, to join the SIG, and then they can provide information about GitHub Actions and the CircleCI orb. So that one is ongoing. The Tekton notifications proposal wasn't public; I think that is still the case, so we keep the item open.
A: The next topic is basically a reminder. You might be aware that CDF is taking part in Google Summer of Code, and people are reaching out to CDF for project ideas. Someone reached out to me, and I directed them to Dan Lopez and Jack Dean from CDF. If any of you have an idea from within CDF, I suggest you contact Dan Lopez and Jack Dean to make it official; they will get the proposal published on the website, and then the students or other interested people can take part in Google Summer of Code.
D: So, hi everybody, and thank you for the invitation, so that I can present Keptn here in this SIG. I'm Andreas Kramer, a technology strategist working at Dynatrace, and I'm one of the main contributors to our open source project called Keptn. Let me start with some motivation: here you can see a pipeline which was really used inside Dynatrace, and this pipeline per se is not really big.
D: It only has about 350 lines of code, but what we have observed here is that we are mixing information: this pipeline contains the process, so which steps are needed in order to do the delivery, and it also contains information about the target platform, the environments.
A: Sorry for interrupting; should this meeting be recorded?
A: It is recorded.
D: Then let's continue. So what we are seeing here is that we are mixing information inside one pipeline. What we did is integrate different tools, different plugins, into one pipeline, and of course we had to do the integration, the interoperability, between all those tools. This is fine if we have a single pipeline, but that is probably not the case in a cloud-native environment: in most cases we have one pipeline per microservice, and you can already see here.
D: This does not scale, and what we get are so-called snowflake pipelines; we have to do this interoperability work and these adaptations for all those pipelines, and this is really the motivation why we came up with Keptn. Keptn in one sentence: it is an event-based control plane for continuous delivery and automated operations. This is important because we also consider automated operations tasks, so Keptn does not end once the artifact is released to production.
D: Instead, we also consider so-called self-healing actions, and this makes Keptn unique. Now I would like to go a little bit into the core concepts which we are following inside Keptn. The first approach is, of course, that we are using declarative ways to define the delivery and also the automation processes, and these definitions can then be shared across any number of microservices.
D: And finally, of course, we have built-in observability, which means we can trace the delivery process from dev until an artifact is in production; we do this using a tool called the Keptn's Bridge. For most of these core concepts we will now see how we implement them in Keptn.
D: The first declarative approach we are introducing in Keptn is the so-called shipyard, and the shipyard not only defines the stages which are needed, but also what to do in those stages.
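The stage/task idea described here can be sketched roughly as follows. This is a hypothetical Python illustration, not Keptn's actual YAML schema; the stage names, field names, and derived task list are all assumptions for illustration.

```python
# Hypothetical sketch of the idea behind a declarative "shipyard":
# a list of stages, each declaring what should happen in it.
# Field and strategy names are illustrative, not the real Keptn schema.
shipyard = {
    "stages": [
        {"name": "dev",        "deployment_strategy": "direct",
         "test_strategy": "functional"},
        {"name": "staging",    "deployment_strategy": "blue_green",
         "test_strategy": "performance"},
        {"name": "production", "deployment_strategy": "blue_green"},
    ]
}

def tasks_for_stage(stage: dict) -> list:
    """Derive the ordered task list a control plane could trigger per stage."""
    tasks = ["deployment"]
    if stage.get("test_strategy"):
        tasks += ["test", "evaluation"]
    return tasks

for stage in shipyard["stages"]:
    print(stage["name"], "->", tasks_for_stage(stage))
```

A control plane would walk this structure and emit one triggered event per task, which is what the next part of the talk describes.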
D: You can already see here that each of these tools can register for a certain type of event, or a list of events, which they are interested in. Now, in order to implement the tasks which we have seen in the shipyard, Keptn sends out a so-called triggered event for the different tasks. Here, for example, we have a deployment triggered event.
D: This can also be a test triggered event, or any other action which you require in your pipeline. What you can see here is a typical CloudEvent, with its mandatory fields like an ID, the source where this event comes from, and also a timestamp, and in the data block we can add information which is needed in order to do this deployment. For example, we can add the strategy which should be followed, e.g. that we are doing a blue/green deployment.
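A minimal sketch of such an event, assuming illustrative names: the envelope fields (`id`, `source`, `type`, `time`, `data`) are standard CloudEvents attributes, but the Keptn-style type string and the data keys are guesses for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(event_type: str, source: str, data: dict) -> dict:
    """Build a minimal CloudEvents-style envelope: mandatory metadata
    plus a free-form data block, as described in the talk."""
    return {
        "specversion": "1.0",
        "type": event_type,            # e.g. a "deployment triggered" event
        "id": str(uuid.uuid4()),       # unique event id
        "source": source,              # which component emitted the event
        "time": datetime.now(timezone.utc).isoformat(),
        "data": data,                  # optional, task-specific payload
    }

event = make_cloud_event(
    "sh.keptn.event.deployment.triggered",   # illustrative type name
    "shipyard-controller",                   # illustrative source
    {"deploymentstrategy": "blue_green", "stage": "staging"},
)
print(json.dumps(event, indent=2))
```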
D: For example, where the deployment or the Helm manifests are stored. The Argo service, in my case, would listen on this certain type of event, the deployment triggered event, and as soon as it starts the deployment it sends out an event: "I have now started my deployment", a deployment started event. The Argo service would then tell me: "I am now deploying, in this stage, the Helm manifests with this tag, which you can find in this repository", and as soon as Argo has now finished my deployment...
D: ...you find the newly deployed artifact. This is really the mechanism which we are using in order to couple these different tools. Of course, Argo, for example, cannot interpret these events yet, so we need a little bit of a translation layer; we are translating this into an Argo call, for example. Now I would like to show you two examples of how we can, for example, do continuous delivery with this.
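The triggered/started/finished coupling just described could be sketched like this. It is a toy stand-in, not Keptn's implementation; the event names and the `argo_like_service` function are invented for illustration.

```python
# Minimal sketch of the coupling mechanism described above: tools
# register for event types; the control plane emits "<task>.triggered",
# a tool answers with "started" and "finished" events, and a small
# translation layer turns the generic event into a tool-specific call
# (here just a plain function).
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)
        self.log = []                      # audit trail of all events

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def emit(self, event_type, data=None):
        self.log.append(event_type)
        for handler in self.handlers[event_type]:
            handler(self, data or {})

def argo_like_service(bus, data):
    """Stand-in for a deployment tool: report start, 'deploy', report done."""
    bus.emit("deployment.started", data)
    # ... a translation layer would call the real tool here ...
    bus.emit("deployment.finished", {"artifact": data.get("artifact")})

bus = EventBus()
bus.subscribe("deployment.triggered", argo_like_service)
bus.emit("deployment.triggered", {"artifact": "carts:0.10.1"})
print(bus.log)  # triggered -> started -> finished ordering
```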
D: For example, we can start the continuous delivery by telling Keptn there is a new artifact which should be deployed.
D: Events are generated and sent out based on this definition of the workflow, which we here call artifact-delivery. In our case, when this workflow is started, Keptn sends out an update triggered event, and in the background there are one or several tools registered for this update event, which then take care of the updating. As soon as all of these tools have reported a finished event, Keptn goes on to the next task, so it would send out the deployment triggered event.
C: Sort of a follow-up on that: I'm trying to equate what I'm seeing here to an in-house tool that we ourselves have developed, versus something like Tekton, Tekton's CD, which I'm sure you're already familiar with. So this shipyard: would you say that it is equal to a pipeline? Can I say that a shipyard is the equivalent of what we would find as the definition of a pipeline?
F: From my side, not really knowing too much about Keptn, it seems like it is almost one level up, because if I was using Tekton I would imagine having maybe a pipeline for a deployment, or a pipeline for update, for the things that you are calling tasks. Or maybe those would be individual workflows, I am not sure; maybe artifact-delivery would be one pipeline. But I don't know; it does seem like this is a level above that.
D: True. Each task is basically implemented by a Docker image, or multiple Docker images, and so yes, we are a level up, that's true. We do not, for example, have steps.
F: Then I guess the part that I am not totally following is that you said the tasks are Docker images. I am kind of surprised, because I thought the idea was that you would be emitting an event that would be consumed by, say, Argo CD in this deployment example, then Argo CD would emit an event when it was done, and then Keptn would continue on from there.
D: The tool doesn't need to know which type of event it is sending; for Keptn this is not relevant. But the information which is encapsulated in the type of the event is really helpful, for example for monitoring tools. If you know when there is a new version deployed, you can mark that version in a monitoring tool like Dynatrace or Prometheus, and therefore having a common set of events would help tools like a monitoring tool understand what is going on in the background. Okay?
F: Yeah, that makes sense, thank you. Then probably my last question. I've noticed some use cases like this being mentioned here as well, and there is one attribute that these scenarios have that I think is interesting; I'm wondering if it's something that you want to support, or actively don't want to support. It is the idea of being able to enter one of these... so, in this slide, I'm just looking at the shipyard; this is like a workflow.
F: Would you want to be able to enter the workflow at different points, or do you want to only be able to start it at one point? Because, by nature of it being an event-driven system, you could imagine a scenario where you manually run Argo CD or something and it emits an event to say "I finished"; would you then want to enter the workflow from that point, or do you only want to enter it from the starting point, when the new artifact is made?
D: A really good question. The goal should be to always run the complete workflow, but if you did this update on your own, you can directly jump into the deployment. So it is event-based, and Keptn should recognize that you started the workflow in the middle, but per se, by design, we would like to start it from the beginning.
F: I see.
F: It's an interesting choice, because some systems have made different choices. I don't know if you've seen Concourse at all, but that's one system that seems to take it to kind of the other extreme, where you can enter a workflow at any point if one of the artifacts that's being watched changes. So, just interesting; I'm still trying to figure out my opinion on whether it's a good idea or not. It seems like it can be confusing; that's my take.
C: ...the shipyard looks at its workflows, its tasks, and sees: okay, we know from the underlying system that each one of these tasks produces this type of event, and you can then figure out what event listeners are already registered and start to build out a graph of the actions that are actually supposed to take place. I understand your point about: yes, what happens when you lose one, when some event is lost in the middle, and obviously the thing that is supposed to listen to it never receives it.
D: A quick follow-up on your last comment: we need this anyway, to know what events we expect, because otherwise we would not be able to do synchronization. For example, if we have multiple testing tools, we would not know when to start the evaluation. Say we have functional tests and performance tests which are executed in parallel: both register for the test triggered event, and we can only start the evaluation if both have completed. So in Keptn we of course already know when to continue with the evaluation.
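That synchronization rule can be sketched in a few lines; this is a toy model of the idea, and the tool names are illustrative.

```python
# Sketch of the synchronization point described above: the control
# plane knows which tools registered for a task's triggered event, so
# it moves on to the evaluation only once a finished event has arrived
# from each of them.
class TaskSynchronizer:
    def __init__(self, registered_tools):
        self.expected = set(registered_tools)   # e.g. functional + performance
        self.finished = set()

    def on_finished(self, tool_name):
        """Record one tool's finished event; True means all are in."""
        self.finished.add(tool_name)
        return self.finished >= self.expected

sync = TaskSynchronizer({"functional-tests", "performance-tests"})
assert not sync.on_finished("functional-tests")   # still waiting
assert sync.on_finished("performance-tests")      # now start the evaluation
```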
C: So you already have that; okay, that makes sense. Do you have any place that a developer would go to see the full graph of actions in one view? Once again, it does not need to be a UI; it can be just some JSON or a YAML that says: based on your shipyard, and based on the uniform that you have registered, here is what we expect to happen. And then I would imagine that something...
D: Good point. We do have the "what took place" part: we can show that to the user in our so-called Keptn's Bridge. Unfortunately I do not have a screenshot here. But the prediction of what will happen in the future, that information the user cannot get yet. Of course we know that we need it, going forward.
A: Apologies for interrupting, but we have one more topic, the presentation from Orange. The discussion and the questions and responses are great, but we need to move on. One last comment or question, perhaps, to Andreas: will you continue joining the SIG meetings, so we can continue the discussions around Keptn and the event-based approach?
D: Yeah, sure, I would love to. And Akif, it was a great discussion, thanks.
A: Thank you, everyone, for the questions. Sorry again; so let's move to the next topic. David?
J: Okay, can you see the slides?
A: Yes, yes.
J: So first, thanks for inviting me to this meeting. I will just show you how we are dealing with installation, from bare metal up to VNFs and PNFs, the network functions, in Orange. It is in the labs for now, but we hope to go to production in the next years. So, Orange is a big telco company; we are more than 100,000 people inside the company.
J: It is an old company. We need to replace many legacy servers that are dealing with calls, from the old circuit-switched calls to the next 5G networks. We are dealing with many terabit routers that carry many, many things on the internet, from the newest technologies to the older ones, and we have many, many vendors inside our network. So we must deal with everything, in a very heterogeneous network.
J: What was the start of our CI reflections? We first worked on the OPNFV project from the Linux Foundation, and our experience on that project was that we were working on the deployment of a big telco cloud. We dealt with installer projects that were really monolithic, installers that were doing everything with a big shell script: they handled all the parameters, all the steps, everything, and they installed, from the bare metal up...
J: ...the OpenStack, and it was quite complex to work with those projects; the Jenkins was in a separate place, and inside Orange we did not appreciate that. Our aim at that time, a few years ago, was to script hardware installations that can be long and complex, and make them the base of further deployments. We want to install an OpenStack that can be used for a few hours or for several years.
J: We need to deploy Kubernetes or a VNF, for testing or for a long time, and each time, at the end of each step, we need to reuse the functions that we deployed. So we had to chain several steps, several projects. We chose to use GitLab CI because we had it internally in Orange; it was the reference tool inside Orange, already used by other teams, and it proposes CI simply and natively, without extra servers. But GitLab CI is very much a per-project tool.
J: It can trigger pipelines in other projects, but it is quite complex, and it is not easy to deal with artifacts. So, back to basics: we just set up a few projects. Each project has input parameters and input files; the code is totally agnostic to where it will be deployed and what the parameters will be. The goal inside Orange was to avoid projects with hard-coded parameters, or parameters that were linked to one kind of environment.
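The principle of environment-agnostic jobs can be sketched as follows. The step, parameter, and artifact names are invented for illustration and are not Orange's real schema: everything environment-specific arrives as inputs, and everything the job learns leaves as artifacts for the next step.

```python
# Sketch of an environment-agnostic CI job: it receives parameters and
# input files, and returns artifacts; it contains no hard-coded
# knowledge of the environment it runs against.
def deploy_vm(params: dict, input_files: dict) -> dict:
    """Pretend 'VM creation' step: consumes inputs, emits artifacts."""
    inventory = input_files.get("inventory", "")
    vm_name = f"{params['site']}-{params['flavor']}-vm"
    # ... the real job would call an infrastructure API here ...
    return {
        "artifacts": {
            "inventory": inventory + "\n" + vm_name,
            "credentials": {"user": "admin", "host": vm_name},
        }
    }

out = deploy_vm({"site": "lab1", "flavor": "small"}, {"inventory": "jumphost"})
print(out["artifacts"]["credentials"]["host"])
```

Swapping the environment means swapping the inputs, never the code, which is exactly the replaceability described in the talk.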
J: Each project has to deal with its own CI/CD steps. We don't want people from another team, or from somewhere else, to say "you must use that", and we don't want to have long meetings in order to install complex environments. We want to have the artifacts archived where we want: you have the logins, the passwords, the build results, everything, and that can be encrypted.
J: We just want to have one CI/CD config per project: one project, for example, that deals with the creation of a VM, another project that deals with the installation of a bare-metal server. They are separate, split projects, and each project knows best how to deal with its own job. It is not another project, another pipeline somewhere, that will say "you have to deal with that".
J: We want to easily replace one project by another, so we have to simplify the inputs and the outputs, and to have something common between projects with the same function. So, in theory: we have a step with a scenario configuration that creates a file, or a set of files. We run the jumphost preparation (in this example, it was for installing OpenStack), which creates files with the infrastructure description and the credentials, and we deploy the operating system on the infrastructure.
J
So
we
have
a
new
set
of
files
with
the
credentials,
the
jumpers,
credentials
and
infrastructure
description.
We
deploy
OpenStack
with
a
specific
project.
We
have
now
an
openstack
admin
credential
and
we
test
at
the
end.
We
open
stack.
So
that
was
a
theory.
If
we
want
to
go
now
to
the
the
practice
we
just
have
a
set
of
different
projects
in
get
lab.
We
have
a
config
project.
We
have
an
interest
in
from
another
project
that
will
create
and
deploy
the
the
bi-metal
servers.
J: We have the Kolla project, which is just a wrapper around the Kolla OpenStack installer, and we have the Functest project that will test the OpenStack each time. We are dealing with the inputs and the outputs of the projects, so we can deploy OpenStack on a VM or on bare metal: we just have to replace the project, deploying not with "infra-manager", which deals with bare metal, but with "os-infra-manager", which deals with OpenStack, for example. We can also have different steps for a Kubernetes deployment.
J: Still with the same scenario configuration: creation of VMs, deployment of Kubernetes, and testing of Kubernetes. We also have the same suite of steps for ONAP (ONAP is a VNF-focused orchestrator): we have the configuration files for the scenario, the Kubernetes preparation, the ONAP deployment, and the lab tests. As we work inside Orange, we test each final result each time with the automated test project.
J: And of course we can chain the pipelines of pipelines. With a simple click we can: deploy OpenStack on bare metal, then deploy Kubernetes on this OpenStack; or deploy OpenStack, add VMs, deploy Kubernetes, test Kubernetes, deploy and test ONAP, then deploy a VNF, etc. All of that can be triggered by a user, can be triggered by a cron job simply, or can be triggered by incoming comments on reviews, as we are using them. We are dealing with different inputs to the projects, and those inputs are creating artifacts with the project.
J: Orange is dealing with many vendors. When I say we are working with projects, it is mainly projects that are triggering or installing vendor scripts or vendor applications, or open VNFs. So it is mainly wrap-up scripts that are working with other applications. The idea is to manage the chain of the pipelines, the pipeline of pipelines, and to let the vendor application live in its own project.
J: So we worked on a GitLab project called chained-ci. It is the scenario manager that deals with the pipeline of pipelines: it is a trigger of other GitLab projects. This project wraps the chain: chained-ci prepares a set of input data, fetches the results of the other pipelines, and creates the new inputs for the next steps. Each time we can encrypt, we can deal with the SSH config, and we keep the solution able to work on several infrastructures.
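The "pipeline of pipelines" behavior described here can be sketched as a small driver. This is a toy model, not the real chained-ci code; all step names and data shapes are assumptions. Each step receives the accumulated artifacts of the previous steps and contributes its own.

```python
# Sketch of a scenario manager: run a chain of independent "projects",
# feeding each step the artifacts produced by the earlier ones.
def run_chain(steps, scenario_params):
    artifacts = {"scenario": scenario_params}   # initial input data
    for name, job in steps:
        # prepare inputs from earlier results, run, collect new artifacts
        artifacts[name] = job(artifacts)
    return artifacts

def create_vms(inputs):
    return {"inventory": ["vm1", "vm2"], "site": inputs["scenario"]["site"]}

def deploy_openstack(inputs):
    return {"admin_rc": "openstack@" + inputs["create_vms"]["site"]}

def run_tests(inputs):
    return {"target": inputs["deploy_openstack"]["admin_rc"], "result": "PASS"}

result = run_chain(
    [("create_vms", create_vms),
     ("deploy_openstack", deploy_openstack),
     ("run_tests", run_tests)],
    {"site": "lab1"},
)
print(result["run_tests"])
```

Replacing one project by another with the same inputs and outputs (e.g. bare metal instead of VMs) just means swapping one entry in the step list.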
J: It is really using the API. So of course you could call another API, but for now it is only linked with GitLab. You can have several GitLabs: we are deploying with the same scenario while dealing with an internal GitLab and an external GitLab on gitlab.com. For example, in the Orange GitLab we are deploying the bare-metal servers and the OpenStack, and on gitlab.com we are deploying ONAP. So, can you see the Firefox page?
A: Yeah.
J: So this is a simple page (it is quite basic, I have to admit, as I am not a web developer) that shows the different steps of a scenario. Here we are getting ONAP: each time people on the ONAP project create a package, it is deployed automatically on our infrastructure. Here we can just click (it is not the best example, this one) on a step of the meta-pipeline to see what is inside the other pipeline, the other project.
J: So this project, the ONAP project, has its own GitLab CI file. It is the manager: that project is the only one that knows what to do for its own installation. The idea is that in Orange many teams are dealing with just one piece of equipment, with specific vendors. We have to create the projects, and each person managing a project is responsible for deploying their function.
J: So, for example, we have the several steps used to deploy the ONAP project, and we can simply click on the ONAP step to go to the GitLab project and see what is going on, so it is quite easy to go from one pipeline to the others. The project here is on gitlab.com; you will have the links to see that. So, for example, sorry, I will just show you a scenario file for a deployment; this is the configuration.
J: Those parameters will be sent to the project os-infra-manager, with the file generated previously and with two specific parameters. At the end we have the ONAP deployment step, which will mix artifacts from the infra-deploy step (the previous step), mixing pipeline artifacts from other projects or pipelines from specific places, and at the end we have the step for the application testing, still mixing artifacts, if we want to do that.
J: We just have a simple application that you can find here, in the LFN open source projects from Orange; I will just show you that. On the CD part, we have an application that will catch the webhooks from Gerrit, create a waiting queue, and then start the GitLab CI jobs just by calling another webhook. So it is just relaying one incoming call to another webhook with specific parameters.
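The relay just described can be sketched like this. The URL and payload field names are hypothetical, and `post` stands in for a real HTTP call (e.g. via urllib); this is only a model of the queue-and-forward idea.

```python
# Sketch of a webhook relay: catch incoming webhooks (e.g. from
# Gerrit), put them in a waiting queue, and trigger the target CI by
# calling another webhook with specific parameters.
from collections import deque

class WebhookRelay:
    def __init__(self, target_url, post):
        self.queue = deque()
        self.target_url = target_url
        self.post = post                   # injected HTTP-call function

    def on_incoming(self, payload):
        """Called for each incoming webhook: just enqueue it."""
        self.queue.append(payload)

    def drain(self):
        """Forward queued events one by one to the target webhook."""
        while self.queue:
            event = self.queue.popleft()
            self.post(self.target_url, {"ref": event["branch"],
                                        "change": event["change_id"]})

sent = []
relay = WebhookRelay("https://gitlab.example/trigger",      # hypothetical URL
                     lambda url, body: sent.append((url, body)))
relay.on_incoming({"branch": "master", "change_id": "42"})
relay.drain()
print(sent)
```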
J: So it is reused, and we have to deal with many projects and split the responsibilities of each project among specific people. We don't have canary, we don't have canary testing or anything like that, because some applications don't have enough traffic to be tested with a canary, and some other network functions are historical, with complex data inside, and cannot be stopped and started like usual web services.
J: So it is maybe a little more complex, but we are dealing with it as well as we can, by splitting the projects. For example, if we want to describe what we want to install, this is one of the configuration files, together with the PDF, the pod descriptor, an idea from OPNFV that we kept. This file describes what we are deploying: the VMs, etc. Everything is a configuration inside that project.
J: Of course, this project does not simplify GitLab CI itself: people who are creating their own project and want it to be triggered afterwards must understand how to use GitLab CI, and that is not a trivial entry ticket. But they only have to deal with the installation of their own project, and it spares them from understanding how to access the resources, how to pass the proxy, and everything like that. For now the chained-ci project is mainly in Ansible.
A: Andreas, if you can upload these presentations to the CDF presentations repository on GitHub, then people can look at those presentations and browse the websites or the code you have there; that would really help. The recording of this meeting will be published on YouTube as well. I don't know if anyone has any questions; we are already over time. Sorry to interrupt, David, but thanks for the presentation.