From YouTube: CDF SIG Interoperability Meeting 2020-03-19
C
I have a question: I've seen that you have worked on the vocabulary, yeah?
A
And maybe that is a good topic; actually, we can spend a few minutes on it, because the idea of the vocabulary is not just documenting the terms used by the tools, but to see if we can come up with some common vocabulary across these different tools. Most of the terms are more or less common, but there are differences.
D
But one thing I just want to emphasize as a kind of best practice for roadmaps these days, especially for a group like this, is not focusing on specific timelines or delivery dates, but rather using time horizons: typically "now", "next", and "later". So the next section is the charter.
D
I didn't know if we had anything already defined for the group, but I went ahead and started putting together what I feel is the charter for the group: that we're working to solve the problem of making CI/CD software tools interoperable and interchangeable. The reason we're doing this is that we want to drive CI/CD tool adoption, and we feel interoperability is essential to that, as well as to promoting innovation in the industry. If we make tools work together, we can get to the next stage of interesting layers.
D
Okay, and then I had a thought there which I have to finish. I tried to think through what the outcomes would be, and it's a bit of a mixed bag: some of it is driven by what we're actually doing, but some of it is trying to have the end in mind. On the one hand, I think we all want to have a better understanding of the different CI/CD tools, how their terminology is used, how those terms compare, and how we can translate between them.
D
But then equally, this could just be a first step toward one of these other outcomes. I think there's a focus on capturing end-user requirements: making sure we always keep the end user in mind, so we're not just doing it for the sake of it, or in the way we think it should be done, but driven by actual use cases. So maybe the first step is to do a bit more of that.
D
But I think some of the things we want to get out of it are: having this shared reference terminology, so it's always clear when people talk about tools, or so they can use our reference to translate between tools; interoperable tools, tools that can connect together and work in a predefined way that is perhaps tested against standardized frameworks; and then also promoting best practices for how tools interoperate. I'll take a pause there, maybe to capture some sentiment or feedback, or see if people think this is on the right track.
D
We tend to have agreement between people on what we're doing now and what the outcome will be; the near term tends to be a bit wider, with more flexibility about what it will mean; and then we have a bucket for the future, which will be broad in scope and very flexible. We haven't talked about implementation yet; it might change depending on where the industry heads.
D
So those are the three buckets that I want to put things in. In terms of what I'm seeing currently: in the "now" we're focused on this knowledge transfer across CI/CD tools, on capturing end-user case studies, and on getting this shared terminology published. The "next" would be things like pipeline standardization or deployment standardization, and then maybe further down we'd have things like event standardization. So this is pretty raw.
D
Okay, so unless there are any questions or comments, I'll work on this for another week or so, and then I'm hoping to have something a bit more presentable for a wider group. Anyone who wants to get involved in the process over the next couple of weeks: you've got the doc, so please feel free to go ahead and start commenting or adding things to it, anything you want to drive, or that has been completed, that we should make people aware of.
A
And he is actually a member of the SIG, but he is a busy guy, so he is not joining the meetings. But when we started working on the vocabulary, someone from the Spinnaker community showed up and corrected the mistakes we had made there, so I can reach out to them again and ask to invite them to the SIG. And for Codefresh, no, unfortunately, but if you have any contacts, just invite them.
B
The reason I ask is because both of those tool companies have just a ton of experience gathering use cases, the kinds of things that Tracy is talking about right now. They have a ton of experience dealing with all sorts of various use cases from around the industry, and they can really turbocharge the process here and provide a lot of input as to what the vocabulary needs to be.
A
Yeah, well, again: Spinnaker I know pretty well, so I can spam him. But if you can help with that as well, that would be great, because I don't know the people on those teams either. But yeah, I hear you. In addition to Spinnaker and Codefresh, I reached out to that person in the SIG, Kay Williams from Microsoft, and asked if she can help us find someone from GitHub, and she told me to send an email to her.
B
So I mean, I can obviously provide lots of input. Within eBay we have lots and lots of different use cases that we come upon and have to solve, and we've got internal tooling that's doing bits and pieces of it, trying to fill in the process as best as possible. But having a tool company represented here that is taking in requests and feedback from all sorts of different companies...
A
And actually, the last-minute addition to the agenda is related to this roadmap; maybe let me move that point to right here, so we don't switch the context. Andreas asked if he can add the vocabulary about Keptn, and obviously that would be great. And my question is actually answered in your presentation or document, Tracy: the next step is vocabulary and terminology. So maybe we can either wait to start working on establishing this shared vocabulary or terminology, or do that in parallel.
A
If anyone wants to take a first step in that part, because that was kind of the intention when we started with the vocabulary: even though there are many terms used across the different tools that mean the same thing, there are a few differences. So again, I just want to ask this question: what to do with the vocabulary? What are the next steps?
D
Actually, just one thing to share from the Outreach Committee: we're looking at doing some kind of more themed newsletters, and one of the suggestions I made was that, you know, one month would be security and another might be interoperability. So maybe tying into that, a next step would be just producing a handful of different posts: some on the shared vocabulary, some on the tools and how they get used.
D
Some, you know, big picture; some on specific tooling. That's kind of the next step, just getting the word out. And then I would love to see us drive toward almost a kind of: what's our standardized, recommended set of terms for people to use? And maybe that's something the Tekton folks can really help with.
A
Okay, so then we can talk about this offline, Tracy, and see if we can somehow bring it to the Outreach Committee. Yeah, okay, so that was that topic. Going to the next one: this topic was added by Christie, but as she noted, she won't be able to participate in the SIG for a while. But I still want to talk about this topic, because it is always good to hear from different people.
A
For Europe it is actually more difficult, because it will start at 5 p.m., and that will be difficult for people from Europe, because of, you know, schools and so on. But yeah, thanks Eric for that; we'll see what we do. And the next topic is another presentation, but this time about Zuul. Jeremy kindly accepted to present Zuul to the SIG, and he already shared the slides in advance. I put the slides in the meeting agenda and I will share them. Over to Jeremy.
G
The service doesn't have to be used like that; you can run it in a purely advisory manner. But by giving it full control over whether or not changes are merged, that gives it the ability to actually sequence the order in which commits are merged across all of the repositories that it controls, and the ability to actually test what the state of that set of repositories is going to be, from the perspective of all of the CI jobs you've defined, at every stage in that sequence.
G
So you don't have to worry about: if I approve this change in repository A and this change in repository B at the same time, is one of them breaking the other because they haven't been tested in the context of one another? Or has something else emerged that I'm not even aware of, because someone on another team that works on software related to mine has merged something that's going to cause my change to break?
G
This is sort of central to the design, really, because it's basically meant for Git-specific operations, and it's relying on code review interactions for its cues and triggers. But as Git seems to be the predominant revision control system these days, and code review workflows are becoming increasingly prevalent, and the projects we were working on that prompted us to develop it in the first place focused entirely on both of those, that seemed like a reasonable compromise.
G
It's designed to be cross-project and multi-tenant, and to handle source code from multiple different code review systems simultaneously. For example, a large deployment that I operate right now has over 2,000 git repositories that it's managing. It has source connections to repositories in a Gerrit instance (actually more than one Gerrit instance) and in GitHub, and it can basically provide its functionality across those multiple code review systems and treat them in a consistent fashion.
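As a rough sketch of what that multi-tenant setup looks like, Zuul's main tenant configuration lists the projects each tenant pulls in from each source connection; all tenant and project names below are hypothetical:

```yaml
# Hypothetical Zuul tenant configuration (main.yaml): one tenant
# drawing projects from both a Gerrit and a GitHub connection.
- tenant:
    name: example-tenant
    source:
      gerrit:
        config-projects:
          - example/zuul-config        # trusted, central configuration
        untrusted-projects:
          - example/service-a
      github:
        untrusted-projects:
          - example-org/service-b
```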
G
It is, of course, open source. Actually, it follows what we refer to as the four opens: open source, open design, open development, open community. It is focused on everything happening in the open and being open to participation from anyone who wants to take part, all the way from submitting fixes and filing bugs, to deciding what's going to be in the roadmap, to the overall governance and direction of the project. It can be run yourself, and it's actually got a fairly light footprint.
G
There's a quick start on the Zuul website that will install all of the different microservices from containers onto a single system, and we actually use this for Zuul to test changes to its own code base. We do that on a virtual machine with eight gigs of RAM and like 80 gigs of hard disk, and that also includes, like, an example Gerrit and some other stuff. So I mean, you could really run it on a fairly tiny footprint if you wanted to.
G
So, a little bit about the history of the project. Basically, OpenStack was the genesis of the Zuul project. OpenStack began around 2010, and the CI team at the time started using Hudson to do CI testing. The folks from the team that was doing the community infrastructure management for the forming OpenStack project had worked on another project called Drizzle, and the Drizzle community had this philosophy that if it's not tested, it's broken.
G
So in 2010, when OpenStack began as a public project, every single change that was proposed to any of the codebases had to pass a battery of CI tests and had to be reviewed by other developers before it was allowed to merge to any of the public branches of the git repositories. That's fairly common these days; it was actually not that common back in 2010. So we ran into a lot of challenges, especially as the project gained in popularity.
G
We saw a huge uptick in commit activity from developers, an increase in the actual CI coverage, as well as in the code that was being proposed and the reviews being done. So we realized in about 2012 that we really needed to have some way of serializing the changes that were going in, because OpenStack was a multi-repository community of projects, and they had some fairly tight coupling between services.
G
We needed a way to serialize changes across multiple repositories as Jenkins was testing them, but we did not have enough time in the day to test, one at a time, all the changes that were being proposed to merge. So we borrowed an idea from microprocessor design called speculative execution, where we assume that changes are going to merge, and that allows us to test the changes that follow them as if the changes ahead of them had merged.
G
So we include the previously approved changes in the context of the change that's being tested, and then, if one of the changes fails, at that point we pull it aside and we reset testing for all the changes that were approved after it, so it's no longer included in their context, and retest them. So it gets us a nice compromise between having a serialized merge order across multiple repositories while still being able to perform testing of all of the changes that are in flight in parallel.
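In Zuul's configuration, this speculative gating behavior comes from the "dependent" pipeline manager. A minimal sketch of a gate pipeline follows; the Gerrit label names here follow common convention and will vary per site:

```yaml
# A minimal sketch of a gating pipeline: the dependent manager queues
# approved changes and tests each one speculatively on top of the
# changes ahead of it in the queue.
- pipeline:
    name: gate
    manager: dependent
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Workflow: 1        # conventional approval label
    success:
      gerrit:
        Verified: 2
        submit: true             # Zuul itself merges the change
    failure:
      gerrit:
        Verified: -2
```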
G
So we came up with a way, through a Jenkins plugin, to have Zuul's scheduler coordinate multiple Jenkins masters, and that was basically Zuul v2. It was a multi-master fan-out for Jenkins, effectively, so you could have a central dashboard and evenly distribute jobs across as many Jenkins masters as you wanted.
G
So at that point, Jenkins was really just remotely running commands via SSH on slaves; we had another separate process that was already managing the slave pool and adding and removing them from Jenkins to fulfill job requests. So we hit on the idea that we could just swap out Jenkins for something which did most of the equivalent functionality for us, and Ansible was a really good fit.
G
It was also written in Python, which is what Zuul had been written in, because that's what OpenStack was written in, and it used YAML for configuration. Early on in running Jenkins, we had hit on the fact that it was really hard to keep Jenkins configurations in a git repository. We wanted to do git-driven public management of everything, including our job configurations, but Jenkins kept them in XML, which was kind of painful and extremely repetitive, with a lot of duplication.
G
So we wrote a separate utility called Jenkins Job Builder that allowed us to template Jenkins jobs out of YAML, and basically avoid a lot of duplication, be easier to edit, and easier to review diffs of when we were proposing changes to the jobs, and so on. Since Ansible already used YAML for its own task configuration, that was a nice fit: we could just continue to use YAML for all of our job definitions, including the job payloads, and got a nice synergy from that.
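As a rough illustration, a Jenkins Job Builder definition templates jobs out of YAML like the sketch below (the names and test command are hypothetical); JJB then expands it into the XML that Jenkins expects:

```yaml
# Hypothetical Jenkins Job Builder input: one template reused by a
# project entry, instead of hand-maintaining per-job XML.
- job-template:
    name: '{name}-unit-tests'
    builders:
      - shell: 'tox -e py3'      # hypothetical test entry point

- project:
    name: example-service
    jobs:
      - '{name}-unit-tests'
```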
G
But we were hitting the point where Zuul had gotten kind of cobbled together, and we were basically in need of making the entire architecture of the system easier for people to run. A lot of others, particularly open source projects, but also commercial enterprises and so on, had started trying to use Zuul, and it was not easy to manage, mainly.
G
We moved job definitions to be incorporated into the git repositories that were being tested, rather than having a central repository of job configuration, effectively allowing projects to manage their own job definitions: more self-service, and able to branch job definitions, so it basically became branch-aware from a job perspective. But we did not want to wind up with a whole lot of related projects each carrying lots of repetitive and, possibly over time, slightly divergent job configurations, so we also designed it to support centralized job configuration. And so that's sort of the interoperability story.
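A sketch of what that in-repo, self-service configuration looks like: a .zuul.yaml file carried in the tested repository itself (the job and playbook names are hypothetical; "base" and "check" are the conventional base job and pipeline names):

```yaml
# Hypothetical .zuul.yaml inside the tested repository: the project
# defines its own job and attaches it to a pipeline, per branch.
- job:
    name: example-unit-tests
    parent: base                   # inherits from a central base job
    run: playbooks/unit-tests.yaml

- project:
    check:
      jobs:
        - example-unit-tests
```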
G
We liked that Ansible was something a lot of people were already using in their organizations to manage, orchestrate, and deploy their software, so it gives them the ability to possibly reuse a lot of their deployment tooling directly in their jobs, or at least more natively in their jobs, rather than, say, calling it from shell scripts and that sort of thing. It also made orchestrating jobs across multiple hosts easy; from a multi-host job perspective, Ansible is basically already designed to do exactly that.
G
So we didn't have to do anything particularly special to have jobs be able to use multiple systems in a single build. It can be extended with Python modules, much in the same way that we had previously used Jenkins plugins to add a variety of non-core functionality; Ansible has a vast library of available modules to do things that people commonly want to do, and it can still easily run arbitrary shell scripts, so it's pretty much just a few lines of Ansible to run whatever shell command you want.
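For example, the playbook a Zuul job runs can be just a few lines of Ansible; here is a minimal sketch (the script path is hypothetical, while zuul.project.src_dir is a standard Zuul-provided job variable):

```yaml
# A minimal job playbook: run an arbitrary shell script on every
# node allocated to the build.
- hosts: all
  tasks:
    - name: Run the project's test script
      shell: ./run-tests.sh        # hypothetical script
      args:
        chdir: "{{ zuul.project.src_dir }}"
```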
G
On the job-sharing slide: we basically wanted to design a system where the workflows and triggers for your builds were not tied to the individual job definitions, so triggering is more of a central concept, bound to the pipelines that we have designed, which are global constructs within a tenant.
G
We basically wanted the ability to reference job definitions and Ansible roles in any project from another project, so we came up with safe ways to be able to do that and still be able to implement encrypted secrets that the jobs can make use of. We've got a lot of access controls around how job inheritance works with the secrets, and how they're bound to playbooks within the repositories that they're included in, and so on.
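A sketch of how an encrypted secret is declared and bound to a job's playbook; the ciphertext is encrypted against the project's public key, so it can live safely in the repository (the names and truncated ciphertext are hypothetical):

```yaml
# Hypothetical Zuul secret bound to a publish job: only the playbook
# named in the job that declares the secret can use it.
- secret:
    name: example-registry-credentials
    data:
      username: publisher
      password: !encrypted/pkcs1-oaep
        - BFhtdnm8uXx7vz...        # truncated ciphertext

- job:
    name: example-publish
    run: playbooks/publish.yaml
    secrets:
      - example-registry-credentials
```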
G
And the story around avoiding a lot of repetition in similar job definitions, kind of the whole reason why we had done Jenkins Job Builder years before: we wanted a system that allowed you to describe jobs in a way that supported inheritance and the ability to declare minor variations, things like altering variables or passing extra bits of data to a job, without having to carry a separate definition of that job.
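A sketch of that inheritance-with-variation idea: the child job reuses the parent's playbook and overrides only one variable, instead of carrying a full separate definition (the names and variable are hypothetical):

```yaml
# Hypothetical Zuul job variation: the child overrides one variable.
- job:
    name: example-tests
    run: playbooks/tests.yaml
    vars:
      python_version: '3.8'

- job:
    name: example-tests-py39
    parent: example-tests
    vars:
      python_version: '3.9'        # the only difference from the parent
```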
G
We designed it to securely be able to reuse jobs between different branches, different projects, different tenants, different source connections, so you can have jobs that are defined in a repository in Gerrit being used to test changes or pull requests to a project in GitHub, or what have you.
G
We maintain a batteries-included sort of job repository called zuul-jobs that is intended to house generic job definitions, and the Ansible roles that those job definitions might use, so that people don't have to rewrite the same stuff over and over, and they don't even necessarily need to carry their own copies of this.
G
It can be used either with a local fork, if you do want to pin to specific points in time in the repository, or have your own local modifications that, for some reason, couldn't be handled through inheritance or variation. But you can certainly also set up something generic: there's a Git driver in Zuul, which is one of the source drivers, and it supports doing just periodic polls of a remote Git repository and updates the jobs that it knows about from that remote.
G
So you can continuously deploy the global main copy of the zuul-jobs repository within your own system, if you want to use it that way; it's intentionally designed to support that. And we also have a lot of advisory testing happening on that zuul-jobs repository from other sites that have deployed Zuul.
G
All they really have to do is set up a Gerrit source connection for the OpenDev copy of the zuul-jobs repository, because that's where it's hosted, and they can automatically provide feedback into the OpenDev Gerrit, letting the reviewers of the zuul-jobs repository know: hey, if this change merges, it's going to break these tests that we're running within Software Factory or our deployment. So it's basically designed to make things like third-party testing, even of job configurations themselves, easy and straightforward. On to the multi-connection slide.
G
As I mentioned, Zuul is designed to connect to multiple code review systems at the same time, and there are a lot of people who are using it that way already. It's able to bridge review and hosting platforms, currently including Gerrit, GitHub, Pagure, and statically hosted Git. There's an experimental GitLab driver that hasn't quite got full feature parity with some of the others yet, but is nearing completion from that perspective, and there's also a Bitbucket driver that is currently in progress.
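Those source connections are declared in Zuul's zuul.conf service configuration, one section per connection. A rough sketch, with hypothetical host names and credentials:

```ini
# Hypothetical zuul.conf fragment: one [connection] section per
# code review system the scheduler talks to.
[connection gerrit]
driver=gerrit
server=review.example.org
user=zuul

[connection github]
driver=github
app_id=12345                      ; hypothetical GitHub App ID
```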
G
There are changes proposed that would add it to Zuul's drivers, and I know there are a couple of folks who are successfully using it, but they're still trying to work out some intermittent connection bugs, from what I understand. But basically, it's designed to support, and have a nice driver interface for, whatever code review systems people might want to add. Also, on the job environment slide: it is intended for use with any sort of execution environment you might need.
G
We've got some drivers for the node management layer implemented already, to be able to use OpenStack virtual machines, containers, and bare metal systems, plus Amazon EC2 and EKS. A Microsoft Azure driver is in progress; I just saw another update on it yesterday, and it's got changes up for review if people want to test it out. It supports Google Cloud GCE and GKE, and Kubernetes pods: it can spin up a dedicated Kubernetes pod, or multiple Kubernetes pods, for each build.
G
If you want, you can then interact with the API, and it's got some nice little hooks in there to expose the credentials and everything that the job then needs to do that. Similarly OpenShift, which has some slight differences, but it's very similar to the Kubernetes driver.
G
It also supports statically added, separately managed servers if you need them, or appliances. I know one organization that is using Zuul to test configuration on network switches; you can't easily dynamically create a network switch (some switch software virtualization solutions aside), and generally, for a lot of things that you're trying to test on appliance-type systems, you really need a lab with the hardware in it. So we've got people using it in that fashion just fine. And, as previously mentioned, it's designed to do multi-node testing.
G
It supports not just uniform node groups, but also heterogeneous ones. So, you know, we've got some jobs that the team I'm on runs that do some stuff on an Ubuntu virtual machine, some stuff on a CentOS virtual machine, some stuff on a SUSE virtual machine, and some stuff on a Gentoo virtual machine, all within the context of a single build, to basically test certain actions across multiple platforms.
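A sketch of such a heterogeneous nodeset, where a single build is allocated nodes with different platform labels (the label names are hypothetical):

```yaml
# Hypothetical multi-platform nodeset: one build gets all three nodes.
- nodeset:
    name: multi-platform
    nodes:
      - name: ubuntu
        label: ubuntu-bionic
      - name: centos
        label: centos-8
      - name: suse
        label: opensuse-15
```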
G
So it can treat that as a single group of resources allocated to a particular build. On to the dependencies slide: a big part of what the sequencing functionality in Zuul allows is the ability to define arbitrary dependencies between changes, and we've found that immensely useful over the years. Of course, there's the implicit ordering that happens in a dependent pipeline.
G
If somebody is approving changes that are in related repositories and we're sequencing those, then it basically considers them to be dependent on the changes that merged ahead of them, from a contextual perspective, when it's setting up the execution environments for the jobs. But you can also add your own explicit dependencies on other changes, in commit message footers, or, in GitHub's case (because it tends to cluster commits into a single pull request), in the pull request description.
G
That tells Zuul that it should fetch those other changes you specified as dependencies and incorporate them into the build environment for the builds of your change, and also that it should not allow your change to merge unless the changes you've declared as dependencies merge. It can do that between different projects, but also across different source connections.
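The commit-message footer form looks like the sketch below; the URL is illustrative:

```
Add cross-repository feature

This change needs the API change in the other repository first.

Depends-On: https://review.example.org/c/example/service-a/+/12345
```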
G
So you can have a change in Gerrit depending on a change in GitHub, or whatever, and it works out just fine. And of course it prepares the git repositories themselves: there's a standard role that will allow you to synchronize those to all of the job nodes as well, and basically the repository that's constructed has the dependent changes merged into the relevant branches of the repositories that are being requested by the job.
G
So you don't have to try to have your job itself fetch all of those things; they're provided for you. And the last slide just has the URL to the website, the URL to the presentation source and the speaking notes that Fatih has been very generously sharing on screen for you, and the free license on the content there, if anyone wants to reuse any of it. That was 30 minutes; I am open to questions, if there is time and if Fatih does not need to move the meeting along.
A
Yeah, I will do. I know bits and pieces about Zuul, but one thing that caught my interest is that most of the focus is around the activities that happen near the SCM systems, like GitHub, Gerrit and so on. But what about the further loops in the CI flow, or continuous-delivery type of stuff? Like, if you look at Jenkins, we can have periodic jobs and so on. What about those things?
G
One of the non-source-connection triggers that Zuul implements is its timer trigger, which allows you to basically provide a thing that's familiar if you know how the time specs in a cron job are defined. You can provide a time spec, like cron would use, for your periodic pipeline that says: I want these jobs run at this frequency, like at 6:00 a.m. every Monday, or at midnight on the first day of each month, or every night at 2:00 a.m., however you want to schedule it.
G
And it runs all of those builds within the context of the repositories' canonical state at that time. Since there's no change context in that case, it's basically going to be running builds relative to the public state of your source code repositories. It fires an event for every branch, and so, if you have jobs defined for some of those branches, then they will have builds run within that periodic pipeline, and you can define as many periodic pipelines, with different timer triggers or multiple timer triggers, as you like. And there's... I didn't mention it because...
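A sketch of such a periodic pipeline with a timer trigger; the cron-style schedule here is arbitrary:

```yaml
# Hypothetical periodic pipeline: the timer trigger fires an event
# for every branch on the cron-style schedule.
- pipeline:
    name: periodic
    manager: independent
    trigger:
      timer:
        - time: '0 2 * * *'        # every night at 2:00 a.m.
```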
A
And you mentioned triggers now, like Gerrit events and so on; if I remember correctly, there was a driver for ZeroMQ or something like that. Because, you know, we had this Keptn presentation at a previous meeting, and then the Eiffel presentation during the very first meeting. So what about events, like an event-based approach to this?
G
I mean, there's really nothing that would prevent adding an arbitrary event stream driver to function as a trigger. It really depends on what is being carried within the event payloads themselves; it may or may not be something that would be functional as a source connection as well, but it could certainly be used to trigger builds. I know we've got somebody working on some features right now to allow arbitrary triggering through sort of a webhook.
G
A lot of the power that you get out of Zuul is going to come from the context surrounding the code review events. So it's really focused more on those workflows than on just being an arbitrary execution system; that's kind of more Jenkins' realm, I think, and it's definitely not trying to be an alternative to Jenkins in general. It's mostly just focused on being a code-review-driven continuous integration and project gating system, and people definitely do use Jenkins and Zuul side by side for different things; I know organizations that are doing that.
B
A question with regards to the dependency on Ansible itself. I understand you decided on the Ansible syntax for, effectively, the pipeline definition, I suppose. I mean, I'm somewhat familiar with Ansible; it does orchestration of things, obviously. But what is your actual dependency on the Ansible runtime itself? Do you have one?
G
That said, it doesn't rely strictly on Ansible's own security layers. It has its own key handling built in that basically attempts to sidestep a lot of the pitfalls around, you know, validation of remote host keys, those sorts of things. It also sandboxes the Ansible execution inside bubblewrap, which is basically a very lightweight container implementation, so every time Ansible is called for a build, that Ansible process is isolated from all of the other Ansible processes on the central executor.
G
That allows us to be, well, I guess, a little less worried about the possibility of crosstalk between Ansible processes, especially in situations like the one it was originally designed for, where you're testing proposed changes from untrusted sources. So we want to guard against possible breakout, or leveraging vulnerabilities in Ansible to influence the outcomes of other builds that might be handling sensitive information. And also, it supports a range of Ansible versions, not just locked to one.
B
Okay. I figured there has to be something like that underneath for it to work like that. Much like the way that, you know, Tekton is connected hook, line and sinker to Kubernetes, which is what it requires, it sounds like Zuul is the same way with Ansible. Okay, thanks.
G
Yep, and definitely check the Zuul IRC channel, mailing list, etc. There are folks happy to answer questions about how a lot of that works, not just if you want to run Zuul, but if you're really doing anything on related topics; we like talking about it. As we see it, free software is good regardless of whose project it is, and we're all in this together.