From YouTube: Essentials: Open Planning Meeting (2018-06-26)
Description
This is a weekly meeting to discuss the progress and plan for Jenkins Essentials, an automatically updated Jenkins distribution.
Find us on GitHub: https://github.com/jenkins-infra/evergreen
Join our Gitter chat: https://gitter.im/jenkins-infra/evergreen
Jira board: https://issues.jenkins-ci.org/secure/RapidBoard.jspa?rapidView=406
A: Welcome to the Jenkins Essentials open planning meeting. It looks like we've got only a few of us here today, because Google Hangouts has been especially difficult lately, so I think we can make this quick. And Baptiste, since we just had a discussion with Carlos and Jesse around some of the AWS stuff, maybe you could give us an overview of how the AWS auto-configuration is going.
C: And I just need to try to stop chatting. So, since last week, I did experimentation using ECS, and basically it seems too complex for the purpose, which is, you know, having something working and moving forward — the ECS configuration basically needs a lot of setup on the other side. So I ended up switching back to trying to configure it using the more historical EC2 cloud plugin, which then moves a bit more smoothly.
C: Right now I have a lot of moving parts which I'm starting to reconcile together, basically because I have played on a separate master with the S3 Artifact Manager plugin. For those not really aware of what it is: historically, Jenkins has been storing artifacts on the file system, and for pipeline stashes, everything is sent back and stored on the master itself.
C: So what that plugin does is leverage the JEP, I think, which basically rewires the internals of Jenkins to allow people to plug in a separate storage engine, and in that very case there's already a plugin implemented that lets you store those blobs in S3. So that's what I played with, and it worked out of the box nicely, which is very cool. So then I played and made sure I was able to configure the EC2 part, the EC2 cloud, and now I'm able to do that.
C: So, yes, that's the one, and so you kind of have all the moving parts. Now, the only thing — the separate POC I need to be doing before really finishing the CloudFormation template I've also started — is to remove the AWS access key ID and secret access key I've been using for my POC and to switch to an IAM role, to avoid having to pass around secrets, which is going to be a much more secure solution out of the box.
C: The thing would have, like: OK, I have access to read and write in S3, I have the permission to spawn EC2 instances, and something like this; and when spawning up EC2 instances — instance profiles, for example — I can, through the CLI or the API, say: OK, that instance has that profile. So that means that the thing that is going to be running on that instance will be allowed to do what the policy says. So that's what I'm going to be switching to in the next hours, and so then, to wrap it up.
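For listeners unfamiliar with instance profiles: attaching an IAM role to an EC2 instance lets the process running on it inherit the role's permissions without any access keys being passed around. A policy along these lines is what's being described here — bucket name, statement IDs, and scope are purely illustrative, not the actual evergreen policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IllustrativeArtifactStorageAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::example-artifact-bucket/*"
    },
    {
      "Sid": "IllustrativeAgentProvisioning",
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:TerminateInstances"],
      "Resource": "*"
    }
  ]
}
```

The role carrying a policy like this would then be wrapped in an instance profile and referenced when launching the instance, instead of baking an access key ID and secret into the configuration.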
C: Not that I know of — I don't think I forgot anything obvious. In the future, for the record, indeed, maybe, but it's not existing yet; we'll probably have some refactoring as things go out for the logging part, but that is really very much work in progress for now. So not yet. It's not work in progress for us — it's for somebody else, I think: the people working on the lower layers of the architecture, Carlos again and Jesse, like sending things through Fluentd and so on, I guess.
A: Talking with Mandy, I had some questions about this task you shared with me late last week, I think, and my understanding of it is that, because we are putting logs in a different place, the SSE Gateway wasn't respecting that. Is that something that you just discovered in some of your manual testing, or did that come up in an automated fashion in any way? No?
C: It came to be discovered when, for your PR, you added more plugins, I think; as we had — so, transitively or directly, I'm not sure — we added the SSE Gateway plugin, which we didn't have before, and that plugin, for historical reasons, has always been assuming, you know, a hard-coded …/logs location where the logs live. As that's not true anymore, we had to adapt it, and we didn't. So that's kind of wrapping up what you saw.
A: Alright, awesome. And it sounds like Baptiste has shared some tickets with you; I think the error logging into Sentry was probably one of the big ones that I was thinking about. So, are there any things from last week that you want to talk about, or questions about these two tasks for this week?
A: Just from a historical standpoint, the reason that there's the assert and expect stuff — and then there's crappy tests as well: when we first started working on this, I was fairly fresh to Node, and I started using Jest based on a recommendation from a friend of mine who has a strong Node background, and I didn't know, for probably the first two or three weeks, that Jest had a lot more useful features than I was aware of.
A: …a client for some of these APIs to model some of the interactions, and that's where the helpers.js — let me pull that up — the helpers stuff in the acceptance directory started coming to be. I think there's still worthwhile refactoring work that we can do to make, like, an actual pretend client that's gonna make sort of an authenticated register call, post versions, and that sort of thing. I don't know if there's tools that go along nicely with Jest.
A: Whether Jest does this nicely around defining fixtures — that's another area that I sort of mentally punted on in my head. Like, right now there's a lot of acceptance tests in particular that are defining request bodies and expected responses that could easily just be defined in fixtures and reused in a lot of different tests.
A: Yeah, we've basically been — whenever I noticed that we've raised the threshold to a certain amount, I set that as the new minimum bar. I want us to be getting better, not worse, yeah. This is in the services package.json — there's a threshold in there — and then in the distribution client's package.json there are thresholds defined for Jest as well.
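The ratchet described here uses Jest's standard `coverageThreshold` setting in package.json; the numbers below are illustrative, not the project's actual values:

```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```

With this in place, running Jest with coverage enabled fails the build whenever coverage drops below the configured minimums, which is what makes the "better, not worse" bar enforceable.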
A: So, for me, the big thing that happened last week — or, I guess it's Tuesday, so yeah, it was last week — is Baptiste helped me finally get that damn pull request 105 merged, which includes the update service properly. I'm actually really, really happy about what Baptiste submitted — let me find the pull request — I was really, really happy to see this pull request. So Baptiste, just, I hope…

A: …ran this thing and generated, I think with make ingest, the update center data, and so we've — cool — pulled in some updates automatically for all of these plugins. And so, for me — I know we're not using incrementals yet; we're using incrementals for Configuration as Code and the Essentials plugin, and…
A: …is going to be a lot simpler, because Feathers has support for — like, Feathers has events sort of built into the subsystem. So, like, whenever a record is created or updated or deleted — any of those verbs; anytime something happens to a record in Feathers — an event is emitted and can be received. So, when we create new update records, or a new update level, we will be able to just automatically dispatch an event over a Socket.IO or WebSocket channel to the client to check in. So this should be fairly straightforward.
A: I'm only taking this one task on this week because I might be disappearing a bit, and I also have a pretty heavy meeting load with some other ideas-related work this week. So, any of these other tickets — if anybody wants to take them from me, you're more than welcome to them, but this one I am anticipating getting done in the next…
C: But when I reach, you know, the point where I'm basically starting up a jenkins/evergreen instance, then I'm going to start having issues, so I will switch back to using something like a Vagrant box, as I usually do, which I've already started doing; but at some point it's likely I will need to redeploy all of these services regularly, or something. So, yeah.
A: I'm just gonna make a note for this, in case you get to it beforehand. I've done some — this was my bad — I had done the sort of initial scoping work on this, but had that in my head and in my yellow notebook, and I didn't put it into the ticket, as I was anticipating getting to it sooner than I am actually going to get to it. And so I talked with Olivier a bit about this, and the challenge…
A: …and find some way of implementing the database migrations. I talked with a friend of mine who does a lot of work on Kubernetes, and the way that — our options are pretty much using an init container, which is not — he suggested that was not the best idea, because then, if you need to run migrations, you have to basically restart your service, and you may not want to do that.
A: The latter pattern that he described for me was that, from our repository, we would basically be creating effectively two service containers: we would have the backend services container that we are already creating, and then we'd have a custom container that had an entry point to run the Sequelize migrations. We would only deploy — we would sort of recreate — that container whenever we had a migration, and then we would either submit it as a job or deploy it as a deployment in the Kubernetes cluster.
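The "separate migrations container submitted as a job" idea might look roughly like the manifest below. The image and job names are hypothetical; `sequelize-cli db:migrate` is Sequelize's standard migration command, and a Kubernetes Job with `restartPolicy: Never` runs it once to completion:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: evergreen-backend-migrations   # hypothetical name
spec:
  backoffLimit: 2           # retry a couple of times on failure
  template:
    spec:
      restartPolicy: Never  # run once; don't restart the pod in place
      containers:
        - name: migrations
          # Hypothetical image built from the same repository as the
          # backend services container, with a migration entry point.
          image: example/evergreen-backend-migrations:latest
          command: ["npx", "sequelize-cli", "db:migrate"]
```

Recreating and submitting this job only when a migration lands matches the deploy-only-on-change behavior described above, and keeps the backend service itself from having to restart just to run migrations.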
A: …to do that. It's just that that's a pattern I had learned at some previous companies too, where running Rails migrations would get slower and slower as time went on. So what we would do is, like every quarter or every half year, we would just basically squash all the migrations into one alter-table statement, basically, and then we'd just run that on our deployments. Because we're running migrations with every deployment, we don't need to optimize for that. Yeah — but I think with these details…
A: It's difficult to test, because the way that we sort of test our Kubernetes resources is we stand up sort of our own Kubernetes cluster for testing, then we provision those resources against that, and then we check them into the Puppet repository. I don't know if Olivier has a more convoluted test setup than I do, but that's how I test the Kubernetes resources, because I don't have a Puppet master running.
C: Making that right now — well, it's kind of a detail, but we actually don't push each image in CI. I'm not sure we actually build it, per se, in the Jenkins CI infrastructure; the back end — what I'm pushing — you have it on [unclear]. Oh really? Yeah — we seem to be only pushing the distribution part.