A
So, what we have been discussing... do you all have access to the meeting notes? It's on the invite, but I'll stick it in the chat as well. What we have been discussing in past meetings is, a little bit, the Tekton client plugin, but also the CloudEvents plugin proposed by Vibhav, and there's more on that in the meeting notes, to give you a bit of context if you are not familiar with those initiatives.
B
Okay, I guess then it's fine. Okay! So, hey, nice to see you all. Let's get started.
A
What that would contain, think about user stories around CloudEvents; those are the action items we have here, actually, and then play with the CloudEvents SDK.
B
Yeah, so, okay: research needed on what CloudEvents metadata should contain. For this one I looked around a bit and saw whether the events are already being collected and sent somewhere through Jenkins, and it turns out there is a plugin called Statistics, a statistics handler or something, so let me just... I have it right here.
B
Oh yeah, it's the Statistics plugin. This plugin could basically help us get started with what all we can send as CloudEvents. Then, what I was thinking was that we could probably just start by taking the events the Statistics Gatherer already gathers and then converting them to CloudEvents in the plugin. So this is what I was thinking about, and this would basically be a good start for this.
B
For that, let me just open the doc so I can... yeah, okay, so here. So that was some of the research I did. Then also the metadata: I still have to make the metadata table and work out what that looks like. Other things: I have been on vacation this last week. Not the whole week on vacation, the last two days I was working, but before that I had been on vacation.
B
With respect to the CloudEvents SDK: I read the code and the examples for the CloudEvents SDK, and it seems rather straightforward how we could take the Jenkins job events, like start, stop, finish, or whatever events are there from the Statistics Gatherer plugin, and just convert them to CloudEvents directly. We just have to see how to extend the other plugin for that, and we could kind of start out over there.
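A minimal sketch of that conversion step, assuming a hypothetical build-event payload and made-up event type names; the real attribute values are exactly what the pending metadata table would pin down:

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloud_event(build_event: dict) -> str:
    """Wrap a Jenkins-style build event into a CloudEvents 1.0
    structured-mode JSON envelope. The source scheme and type prefix
    are illustrative assumptions, not the plugin's actual naming."""
    envelope = {
        "specversion": "1.0",                        # required by the CloudEvents spec
        "id": str(uuid.uuid4()),                     # unique per event
        "source": "/jenkins/" + build_event["job"],  # hypothetical producer identity
        "type": "org.example.jenkins.build." + build_event["phase"],
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": build_event,                         # original statistics payload
    }
    return json.dumps(envelope)

event = json.loads(to_cloud_event({"job": "demo", "phase": "started", "number": 7}))
```

The idea is that whatever the Statistics Gatherer already collects travels unchanged in `data`, while the CloudEvents context attributes are added around it.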
B
This also kind of falls under what the extent of the bootstrap could be. The bootstrap can contain a basic CloudEvents setup, like usage of the CloudEvents HTTP basic module, and then the student can go ahead and see how that works, and then they can play around with the Statistics Gatherer plugin. We can help them out with how to get these statistics and then convert them to CloudEvents, and then with what the metadata should look like.
A
For me, and it might be helpful to others: when you talk about taking the events that we can already sort of register and transforming them to CloudEvents, what is involved in that process of transformation?
B
Good question. That would mean reading through the Statistics Gatherer plugin code. The Statistics Gatherer plugin has a DSL which people can already use; it also has a sink that you can give, and this sink is configured in the Statistics plugin, and all the events regarding job or project creation, or anything that the user has subscribed to, just go to that sink. So the transformation... so, what...
B
I don't have a detailed idea about what exactly goes into the transformation, but this would mean that the student would extend classes from the Statistics Gatherer plugin and then work them out in the CloudEvents plugin itself; that would be the high-level process. What I'll do is try to figure out the exact detailed process, and I can update you all, like on the channel or next week.
B
Thank you. So, the next one we have is: find endpoints that can be subscribed to. This one I will skim over a little bit, because it is more related to consuming CloudEvents, and I am not sure how that would work exactly, because I haven't worked on something like that before. So: do you all have an idea of how we could consume CloudEvents, maybe into jobs or something? Would we consume and then convert the CloudEvents into Jenkins metadata, which can then be used to trigger jobs, or trigger something like a generic webhook trigger, or something like that?
D
There'd be some level of authentication on that, I'm assuming, like an HMAC token or something. Yeah, something along those lines might be a good first pass, and there are plugins out there already that do something very similar.
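For the authentication piece, a sketch of HMAC verification of an incoming webhook body, using only the standard library; the shared-secret scheme and the hex signature format here are assumptions for illustration, not any particular plugin's protocol:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, received_sig: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the signature the sender attached (e.g. in a header)."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(expected, received_sig)

secret = b"shared-secret"
body = b'{"type": "org.example.jenkins.build.started"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
```

A consumer would verify the signature before converting the payload into anything that can trigger a job.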
B
There's a Generic Webhook Trigger plugin, which Mark had also suggested looking at. I still have to look at it.
E
And, yep, the plugin is called... what is the name of that?
B
Okay, so, the next one we have...
B
Okay, so the next one we have is: first prototype to understand how CloudEvents work. This prototype, I feel, would be part of the bootstrap. This bootstrap, as I said, would basically contain a minimal CloudEvents HTTP listener and handler, which would just be based on the CloudEvents HTTP basic module, which we can just create. So that would be a good place for the student to start.
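As a rough idea of what such a minimal handler has to do, here is a sketch that extracts the required context attributes from `ce-`-prefixed headers, following the binary content mode of the CloudEvents HTTP protocol binding; the listener/server wiring is omitted:

```python
REQUIRED = {"id", "source", "type", "specversion"}

def parse_binary_cloud_event(headers: dict, body: bytes) -> dict:
    """Pull CloudEvents context attributes out of 'ce-'-prefixed HTTP
    headers (binary content mode); the payload stays in the body."""
    attrs = {
        name[3:].lower(): value
        for name, value in headers.items()
        if name.lower().startswith("ce-")
    }
    missing = REQUIRED - attrs.keys()
    if missing:
        raise ValueError(f"not a CloudEvent, missing: {sorted(missing)}")
    attrs["data"] = body
    return attrs

evt = parse_binary_cloud_event(
    {"Ce-Id": "1", "Ce-Source": "/jenkins", "Ce-Type": "demo.event",
     "Ce-Specversion": "1.0", "Content-Type": "application/json"},
    b"{}",
)
```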
B
And I had a question about this, actually. For CloudEvents, instead of it being a build step, would it just be a global plugin configuration? Working in the global plugin configuration, the admin should be able to enable certain CloudEvents for the Jenkins instance, so they would go to the plugin configuration...
B
They would go into the CloudEvents configuration, and under that they would just mark: you know, okay, I need to see jobs in the CloudEvents; I need to see job steps also, maybe how they are being executed, whether they fail or something; and then there are projects, maybe I need to see what projects are being created.
B
The reason I thought about this was: we could make build steps as well, but I think a global plugin configuration would make more sense, and it should be easier for the user to just have a proper configuration for the Jenkins instance itself.
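A toy model of that admin-facing switchboard; the event-kind names and the `should_emit` check are illustrative assumptions, not the plugin's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class CloudEventsGlobalConfig:
    """Sketch of a global plugin configuration: the administrator ticks
    which kinds of events Jenkins should emit as CloudEvents."""
    enabled_kinds: set = field(default_factory=set)

    def should_emit(self, event_type: str) -> bool:
        # emit only if the event's kind (e.g. "job" in "job.started")
        # was enabled by the administrator
        kind = event_type.split(".", 1)[0]
        return kind in self.enabled_kinds

config = CloudEventsGlobalConfig(enabled_kinds={"job", "project"})
```

The producer side would consult this check before wrapping and sending each event.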
D
There's a checks strategy, or something like that, that sends GitHub sort-of checks on every build stage that happens, and I'm wondering whether there'd be some useful code inside there to listen for, if it's just: you install the plugin, and every build that happens automatically gets... It might be something similar to listen for.
B
Okay, that's the consumer aspect of it; you're watching. Okay, yeah, that makes sense. I was actually talking about the producer aspect of it: for the plugin to produce the events, probably having a global plugin configuration would make sense. That is what I was thinking.
B
Okay, and so that is producing and consuming. You're saying that we should look at that plugin you're talking about for this. Can you remind me what it's called?
B
So this one, this one would be pretty much for consuming.
B
Okay, I'll actually have to look into this.
D
I think it kind of sends events to something whenever a build or pipeline hits or runs a stage. I think it's used for reporting into things like InfluxDB and Grafana, or something like that. But when it's installed with the Checks plugin, it sends that to GitHub, so that if you've got a pipeline with 10 stages, it will appear as 10 different checks, and you can kind of follow the status of your pipeline through.
B
Next one, let me see. Yeah, so the next one is: figure out the extent to which to bootstrap for the GSoC students. The bootstrap would basically be just implementing the CloudEvents HTTP basic module, to a very simple level, in the plugin, and implementing that in the global plugin configuration.
B
And this is actually related to your last question.
B
Like, how do you see the transformation from the Statistics Gatherer plugin to a CloudEvents plugin? I would probably need a little bit more homework on this part, because that would tell us what the bootstrap would look like; I think we'll have to bootstrap the global plugin configuration and...
B
I'll share it in the doc itself.
A
Have you had a look at the metadata that, say, Tekton is producing? Because for the CloudEvents work you had said that was an initial first step, and I think you've probably already done that, but that might be an initial first step to outline for students, or to have them produce.
B
Yeah, I did that, and I have written down the metadata; it's in a text file over here. I didn't put anything in the Google Doc like these two. So it would basically be like:
B
Dot
io.jenkins.event.com
dot
created
and
that
would
be
succeeded
and
all
that
would
be
failed.
So
so
it
would
be
something
like
that,
so
I
still
have
to
write
the
table
itself,
so
that
would
be
the
action
item
which
should
be
moving.
So
I
couldn't
complete
that
action
item
properly.
I
kind
of
got
an
idea
what
it
would
look
like,
but
I
still
have
to
write
the
metadata
table
down
so
I'll.
Do
that
I'll
I'll?
Do
that
and
post
in
the
media?
Should
I
upload
should
what
I'll
actually
do.
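From the spoken fragment, the types appear to follow a reverse-DNS pattern ending in the outcome. A sketch of such a naming helper, where the `io.jenkins.event` prefix comes from the discussion but the subject segment and the fixed outcome set are placeholders until the metadata table exists:

```python
# Hypothetical type-naming scheme: <prefix>.<subject>.<outcome>,
# e.g. "io.jenkins.event.job.created". The real names are exactly
# what the pending metadata table is meant to pin down.
PREFIX = "io.jenkins.event"
OUTCOMES = {"created", "succeeded", "failed"}

def event_type(subject: str, outcome: str) -> str:
    """Build a CloudEvents 'type' string for a Jenkins subject/outcome."""
    if outcome not in OUTCOMES:
        raise ValueError(f"unknown outcome: {outcome}")
    return f"{PREFIX}.{subject}.{outcome}"
```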
B
I'll create a doc in which I'll just note down everything that is required for the CloudEvents plugin, the extent of the bootstrap, and everything. I'll make good notes about it, instead of having it as plain text in a text file. So I'll do that.
A
That
would
be
great.
That
would
be
really
really
helpful.
I
think,
for
students
and
for
all
of
us
and
then
staying
on
the
same.
A
For the Tekton client plugin proposal, Gareth had mentioned the potential integration between Tekton as Code and the Tekton client plugin. I haven't had a chance to look into it. Gareth, do you want to speak more to that, or Vibhav?
D
I
mean
yeah,
I
I
was
wondering
whether
I
suppose
we
could
use
the
idea
of
the
dot
techton
folder
and
apply
those
type
of
changes.
I
did
speak
to
the
guys
on
the
drinking
sex
project,
about
how
they're,
manipulating
and
modifying
it
within
lighthouse,
which
was
very
similar
sort
of
idea.
D
Actually,
so
I
think
that
kind
of
thing
could
work
quite
nicely.
I
think
there
would
need
to
be
some
processing
of
resources,
but
it's
fairly
minimal
in
this
case
of
yeah
kicking
off
the
pipeline
run
with
the
correct
variable
set
and
things
like
that,
but
it
should
be.
D
Yeah,
so
they
have
a,
they
have
a
a
lighthouse
folder
which
contains
the
tasks
and
the
runs,
and
it's
just
a
list
of
yammer
files
to
be
applied,
but
some
of
them,
I
think
they're
kind
of
like
load
they're,
not
just
applied
loaded,
manipulated
slightly
and
then
applied,
and
it's
what
level
does
that
have
to
have
to
happen,
and
that's
listening,
look
from
looking
yeah,
so
yes
also
so
that
the
tech
timer's
code
protect
yeah,
take
them
as
code
projects
seem
to
you
had
to
create
a
a
pipeline
yaml
and
then
a
tasks
yeah
there
were
certain
file
names
you
needed
to
create,
but,
but
generally
the
approach
is
very,
very
similar
of
how
it's
doing
it
here.
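The load, manipulate, apply flow described here can be sketched roughly as follows; the folder layout, the `$(params.NAME)` substitution, and stopping short of the actual cluster apply are all illustrative assumptions:

```python
from pathlib import Path

def render_tekton_resources(folder: Path, variables: dict) -> list:
    """Read every YAML file in a .tekton-style folder, substitute
    $(params.NAME) placeholders with concrete values, and return the
    rendered manifests. The step that would apply them to the cluster
    (kubectl or a Kubernetes client) is deliberately omitted."""
    rendered = []
    for path in sorted(folder.glob("*.yaml")):
        text = path.read_text()
        for name, value in variables.items():
            # minimal string templating, not real Tekton resolution
            text = text.replace(f"$(params.{name})", value)
        rendered.append(text)
    return rendered
```

In the real flow, rendering would happen on the controller after cloning the repository, with the variables coming from the triggering event.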
A
Okay,
awesome
and
the
above
just
so
I
know,
and
everyone
else
so
you
know
lighthouse
is
functionally
similar
to
prom.
Was
that
a
fair
enough
way
to
say
it.
D
Yeah,
it's
it's
more
of
a
yeah
tecton
native
version
of
pro
really
it
supports
tecton
as
a
more
first-class
citizen,
nice.
B
So
how
so?
How
do
you
see
the
integration
going
about
like
like
we
have
some
technical
idea
like
what
that
would
look
like.
D
I
suppose
it
would
need
I
mean
it
would
need
to
handle
whatever
the
incoming
events
are
to
trigger
the
job.
And
then
it's
going
to
need
to
clone
the
repository,
presumably
on
on
the
master
or
on
the
controller
and
then
load
any.
If
that
tecton
folder
or
the
tecton
folder
exists,
load
any
resources
from
there
process
them
and
then
apply
them
into
the
cluster.
B
For me, thinking in Java is a little tougher; I'm just thinking about how that would work. Could we have an action item in which we work out technically how that would work?
B
Because
in
my
in
my
head,
when
I'm
saying
technocrat,
but
I
I
I
just
see
the
ui
and
so
I'm
just
having
like
a
hard
time
understanding
like
and
when
it
comes
to
technology,
it
just
it's,
they
seem
like
on
like
two
different
planets
to
me.
You
know,
because
the
client
plug-in
basically
would
create
stuff
and.
D
I think that approach would work really well. I don't think directly integrating with the work that has been done for Tekton as Code, but doing something very similar, something along those lines: having a .tekton folder and being able to, I suppose, test for the existence of it, and if it's there, apply those resources.
D
I
actually
have
a
poc
of
that
working
from
a
jenkins
pipeline,
but
the
problem
is
it:
it
spins
up
an
extra
agent
to
be
able
to
do
the
applying
of
things
and
that's
one
of
the
things
I
was
trying
to
avoid
because
it's
an
agent,
it's
a
jvm,
it's
another
pod
to
do
the
applying
of
stuff
and
then
the
monitoring
of
that
job
is.
D
So
one
of
the
things
that
one
of
the
tech
time
sports
really
well
is
the
tecton
catalogues
and
the
sort
of
reusable
tasks
and
they've
got
this
templating
stuff.
Now,
that's
quite
nice,
so
you
can
pull
in
existing
tasks
from
other
gate
repos
and
have
them
versioned
and
synchronize
them,
and
I
think
you
do
the
same
with
pipelines
as
well.
It's
almost
like
a
you
know
a
slightly
it's
a
different
implement,
it's
more
of
a
cloud
native
implementation
of
pipeline
libraries.
B
I probably need more of an idea about this. I'm still a little bit confused; I am still trying to piece the puzzle together, trying to understand at what level this happens. When you say that you're trying to extend the Tekton client plugin to do something similar to Tekton as Code, in this scenario I don't see the .tekton folder.
B
What
I
see
is,
I
see
a
dsl
in
a
jenkins
file
which,
which
is
which
the
user
can
use
because
of
the
techground
plan.
Plugin
I
may
be
the
dog
tecton
folder
can
be
a
place
where
they
can
keep
the
resources
for
the
jenkins
file
to
consume.
B
Yeah, yeah. Actually, that would be good, to shed a bit of light: how exactly does Lighthouse work? Do you have any idea? I don't have a perfect idea of how Tekton as Code works either, and you were saying that Lighthouse works based on the same principles.
D
Yeah
so
lighthouse
it
was
originally
written
as
a
kind
of
replacement
for
prowl
where
it
takes
in
your
github
or
bitbucket
or
whatever
web
hook.
It
converts
it
into
a
like
an
internal
representation.
Really
it
has
a
series
of
plugins
that
can
be
run
against
that
data.
D
But
what
they
found
is
that
they
spent
so
much
time,
essentially
like
version
chasing
or
trying
to
keep
up
with
the
tecton
syntax,
because
it's
evolving
and
it's
you
know
they
had
to
build
that
into
the
jenkins
x,
syntax
and
it
very
was
very
time
consuming.
So
it
was
felt
that
it
was.
It
was
better
to
use
text
on
directly.
So
what
they
have
is
a
dot
lighthouse
folder,
where
you
put
your
tasks
and
pipelines
and
things
and
when
a
job
needs
to
be
triggered,
it
loads.
D
I
mean,
I
don't
think
they
use
customize
at
that
point,
I
think
they
have
been
using
customizing
captain
things
in
other
other
parts
of
the
product,
but
I'm
not
sure
if
you,
I
think
the
lighthouse
was
designed
to
kind
of
be
a
bit
agnostic
to
all
of
those
things.
D
It was originally designed to be like a high-availability webhook handler for GitHub, so that when your cluster was updating, or if anything died, at least you would still record the webhooks, capture them and store them. They stored them internally so they could be processed; that was the idea, and then it would sort of eventually catch up.
B
Oh yeah, okay. So the last one was: it would be great if James was attending, to discuss Tekton as Code.
A
We'll get them. I know they've been really busy with the alpha of Jenkins X 3, but hopefully we'll get them to come in, maybe.
G
Currently, what I have planned is exploring Kubernetes and then reading about CloudEvents on my own; I mean, some steps to follow would also be great.
A
I think exploring Kubernetes is a very good first step, and then you can look at some of the resources we put in the doc. Vibhav, would you like to add anything to that?
B
Yeah
so
into
that
talk
actually,
so
if
you
want
to
get
started
with
cloud
events,
so
that
doesn't
actually
require
kubernetes
knowledge,
you
could
actually
just
play
around
with
the
cloud
events
sdk
and
see
just
go
through
the
example
see
what
it
looks
like
just
play
around.
I
would
give
like
a
really
good
idea
how
to
start
cloud
events.
B
If
you
want
to
see
the
tecton
plant
plugin,
then
you
can,
then
you
would
need
kubernetes
and
act
on
on,
like
literal
from
that
knowledge,
that'll
be
nice.
If
you
would
want
to
do
it.
G
Okay, so I will explore CloudEvents, and then, when I have some idea of what it is actually doing, maybe later on I will report back.
A
All right, good, good meeting. Thank you so much for being here. Thank you for working on these projects. It's really interesting, and see you next week.