From YouTube: CNCF CI WG Meeting - 2018-05-22
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
B
Welcome to the monthly CNCF CI working group. We switched from twice monthly to monthly; it's right now 8:00 a.m. Pacific time on Zoom, and all of that. The notes are available here, this one right here. If anyone wants to add themselves to the who's-here list on the attendees, as well as the agenda, feel free to do that for this one or the next.
B
So, on this project: we were at KubeCon Copenhagen and gave two talks, and had a lot of interactions. The intro talk went over the cross-cloud CI project and how it worked; we've done that before, and all of those talks are on YouTube if you want to check them out. We also gave a deep dive on the cross-cloud provisioning portion, the one component out of all the different components in the project, covering how provisioning currently works for Kubernetes across multiple clouds and the testing that happens, in both talks.
B
We got quite a bit of feedback, which was one of the main goals, so that we can see: where do we want to go, and what's going to benefit the community? And we met up with quite a few folks, individuals as well as the conformance group and others. So, a quick recap of the original goals of this project.
B
The CNCF is growing (we've gone through this quite a bit): lots of new clouds added, new projects, and how do we test them, and how do they all work together? The original goals of the project were building the actual CI platform that could test Kubernetes and deploy it to multiple clouds, deploy the projects onto Kubernetes, and test how they all work together. That was broken into multiple components, including the cross-cloud piece.
B
The multi-cloud provisioner, the cross-project portion, and being able to have the e2e tests split out and separated, whether they were included as the project was deployed or deployed as containers. That was kind of phase one, and it has gone through a few iterations at this point; the components can be used independently. Phase two, which was part of the original goal, was a dashboard: how were the data and status going to be pulled from all those places and displayed? That goal has also been met.
B
As for the current view, it's a multi-view showing the Kubernetes side, kind of the infrastructure testing, as well as the project view. Another original goal, the idea there, was that the CI system itself (as far as builds and such) would be optional and we could use external CI systems. When we added ONAP, we met that goal: rather than just an idea, we were actually able to integrate and pull those results in, and some of those are still in use now; they're actually being pulled in.
B
The ONAP team is doing some work with OPNFV and related things, so we've been giving them feedback on how we did that and integrated with an external system.
B
Some of the projects we have right now: Prometheus, Fluentd, CoreDNS, and linkerd are on the page, plus ONAP, which I was just mentioning. OpenStack, Azure, IBM Cloud, and Packet, of course. And here's the current view of what we're seeing: Kubernetes, and then all the projects. Builds run first, then provisioning of Kubernetes, and then the builds for all of the other projects finish.
B
Then they're deployed. So it's GitLab, Terraform, cloud-init, and kubetest right now for Kubernetes, plus Helm, and then running whatever tests the project has. There was a lot of talk at KubeCon about Terraform, and also a little bit about cloud-init; it all made sense once that was going on, including what's used there and where that would tie in with current deployments from the cluster lifecycle team. On the dashboard, there's less contention, I guess; there's more desire to have similar things, like TestGrid.
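The ordering described here (all builds first, then Kubernetes provisioning, then deployment, then the project tests) can be sketched roughly as follows. This is an illustration only; the stage names, project list, and function are hypothetical placeholders, not the actual pipeline definition.

```python
# Illustrative sketch of the stage ordering described above:
# builds run first, then provisioning, then deployment, then tests.
# Names here are hypothetical, not the real cross-cloud CI config.

STAGES = ["build", "provision", "deploy", "test"]

def run_pipeline(projects, run_stage):
    """Run each stage to completion for every project before moving
    on to the next stage (so deploys wait for all builds)."""
    results = {}
    for stage in STAGES:
        for project in projects:
            results[(stage, project)] = run_stage(stage, project)
    return results

# Example: record the order in which stages execute.
order = []
run_pipeline(
    ["kubernetes", "prometheus", "coredns"],
    lambda stage, project: order.append((stage, project)) or "ok",
)
```

The key property is that a whole stage finishes across every project before the next stage starts, which matches the "builds first, then provisioning, then deploy" flow described above.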
B
Our main goal right now is gathering info for moving forward: what do we want to do, community-wise? So we're working with the testing SIGs and the conformance working group, and talking with specific providers and others. We've had some feedback, as far as the CNCF projects go, about helping more with the end-to-end tests.
B
Those are some of the goals on the community side. On the project-internal side, an API for history (builds, deployments) seems to still be desired, and that could tie in with the status repository piece, which could potentially be useful outside of even the dashboard, providing access to the data the way TestGrid does.
B
As far as the data goes, there was a desire, in talking with the SIG and Aaron and some other folks, to provide access to the TestGrid data and the cross-cloud CI data, and potentially some other projects' data: how to combine all of that, allow people to query it, and filter and see how things worked with different flags. Those are some possibilities. And then, on the dashboard, based on feedback: potentially splitting things out, core versus project deployment, that sort of thing. So we're still gathering feedback and talking with folks.
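One way to picture combining the TestGrid and cross-cloud CI data into something queryable, as discussed here, is to normalize both feeds into common records and filter on fields. The record fields, sources, and sample data below are invented for illustration; this is not either system's actual schema.

```python
# Hypothetical sketch of merging two status feeds (e.g. TestGrid
# and cross-cloud CI) into one queryable list. Field names and
# sample records are invented for illustration only.

def normalize(source, records):
    """Tag each raw record with its source so results stay traceable."""
    return [dict(rec, source=source) for rec in records]

def query(records, **filters):
    """Return records matching every given field=value filter."""
    return [r for r in records
            if all(r.get(k) == v for k, v in filters.items())]

combined = (
    normalize("testgrid", [
        {"project": "kubernetes", "cloud": "gce", "status": "passing"},
    ])
    + normalize("cross-cloud", [
        {"project": "kubernetes", "cloud": "packet", "status": "passing"},
        {"project": "coredns", "cloud": "packet", "status": "failing"},
    ])
)

packet_results = query(combined, cloud="packet")
```

The point of the sketch is the shape of the problem: once both feeds are flattened to common records, filtering by cloud, project, or flag becomes a single query over one list.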
B
Help there, on what we should be testing and showing; then integrations between the different projects, and what would be most useful to support those; and then independent testing of Kubernetes container service providers. That was something talked about at KubeCon and before; I'm not sure exactly where that's going. We want to try to track it, work with the conformance group, and figure out some of these things.
B
Network Service Mesh is something they're asking about for cross-cloud. Some of the other groups are asking too, like OPNFV for their community, about how some of these things could be used, like the NFV integration; we've passed over some of that info on how we did it. I think a lot of the different components, or the ideas behind the different pieces, are desired in different groups. So it's been really nice to see people coming in and saying:
B
"How did you do these different parts?" From the standpoint of taking a lot of ideas and showing how they could work together, I think that's really great, and we're trying to gather enough feedback to see what the best direction to go next would be. We'd love to hear more feedback. If you want to watch any of those again, like the intro video, go back up here; those are there, and the deep dive too. We'd love to hear feedback from anyone, whether on the list or if you would like to dig into anything specific.
A
Just to talk through that problem again: I'm Ed Vielmetti, and I'm at Packet. We provide infrastructure for the bare metal testing on the CNCF CI. Go ahead to the next slide, because I think that's where I got to. So, just general status from our perspective: generally, the status reports have been green for the Packet column, which has been great. Every once in a while they're not green, and we take a close look every morning at the status of things and make sure that nothing unexpected happened overnight.
A
One of the things that we have run into in the past has been capacity issues, where the CI infrastructure requests resources at the exact same time that some other project scoops up all the resources in a data center. What that's pointed us to is a need in our API for some flexibility in the request, such that someone might be able to say: I need eight machines, all in any one of some set of data centers, but I don't care which. So we're exploring some API flexibility that would reduce the capacity-related issues.
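As a sketch of that kind of request flexibility (to be clear, this is not the actual Packet API, just an illustration of the idea): instead of pinning the request to one facility, the caller names an acceptable set and the provider picks any one facility that can host the whole request.

```python
# Illustration of the "eight machines, all in any one of these data
# centers, I don't care which" request shape. The facility names,
# capacity data, and function are hypothetical, not the Packet API.

def provision(count, acceptable_facilities, free_capacity):
    """Pick the first acceptable facility that can host the whole
    request, so all machines land together in one data center."""
    for facility in acceptable_facilities:
        if free_capacity.get(facility, 0) >= count:
            return facility
    raise RuntimeError("no single facility has enough capacity")

# Example: ewr1 is too full, so the request lands in sjc1 instead
# of failing outright.
capacity = {"ewr1": 3, "sjc1": 12, "ams1": 8}
chosen = provision(8, ["ewr1", "sjc1", "ams1"], capacity)
```

The design point is that the request only fails when no facility in the acceptable set has room, rather than failing because one specific data center happened to be full at that moment.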
A
Packet does have reserved hardware capacity, where we could set aside some number of machines dedicated to the task. It would change the test a little bit: it would no longer exercise Packet's capacity, but would focus more solely on the Kubernetes issues. I don't have a really good idea of how many machines you need, and since I know you're only using them for some small number of hours per day, it's not the most efficient, but it might be the most effective. So I'll leave that as an open question to discuss.
A
The other issue that I want to touch on, which has been on the coming-soon or under-consideration list, is cross-cloud CI on Arm. One of the things that I do: Packet runs the Works on Arm project, which is funded by Arm and has some equipment dedicated to the task of porting CI and CD for arm64-based server software. So I thought I'd run through a quick list of the status if we were to start an Arm-on-bare-metal CI.
A
There are community builds of this, but since we're looking at doing a test of the code rather than a test of someone's interpretation of the code, I think this is probably first out of the gate in terms of infrastructure that would be necessary before the testing could commence. Less of a first issue, but a known issue, in terms of conformance testing: the Sonobuoy code base has not been released in an Arm format, a ready-to-go format, and I don't know the degree of difficulty on that.
A
At
least
someone
thought
that
it
wasn't
going
to
be
that
hard,
because
a
lot
of
the
components
were
already
ready,
but
there's
there's
non-trivial
work
necessary
there
to
do
the
test
things
as
two
components.
The
core
of
kubernetes
looks
good
and
we
have
a
community
response
of
people
using
it
in
all
sorts
of
cluster
environments,
both
Prometheus
and
core
DNS
have
been
ported
and
provide
arm.
64,
binaries,
fluent
D
and
linker
D
are
not
currently
ported.
A
There are some dependencies on Rancher inside ONAP, as I understand it from reading their commentary, and that may make an ONAP port a longer process rather than a short one. What I don't have is a list of everything that needs to work for this all to work on Arm; there may be other components I need to be aware of that would be prerequisites for even starting a CI system.
A
That's on my mind. You know, what the complete dependency graph for the entire CNCF CI is, is not a small question. I'm sure we will discover things as we start, but I wanted to give a sense of where I thought the first focused work would be, and if I had to pick one thing out of this whole list, it would be the Helm and Tiller question. So with that, I will take any questions.
B
Not a question, but a comment on testing from the cross-cloud CI project itself: I don't think that running the cross-cloud CI software and the entire stack is required if we have the ability to target resources. So if we bring in resources to run the projects, Kubernetes, and that sort of thing on, then you can deploy to Arm without having all of it working.
C
Go ahead; I'm sorry to interrupt. I just wanted to mention, Ed (this is Dan Kohn), how much we appreciate Packet's contributions here. And on a separate route: I have a totally separate project that I'm looking to spin up, related to the VNF work that Taylor and his team are now working on, where we're hoping to use some Packet hardware, both x86 and Arm, to show automated provisioning of a lot of hardware without needing to use OpenStack. So it circles back to that classic blog post.
B
I'd love to get some of the other groups and folks who are doing CI that could be useful for various CNCF projects, and figure out how we could get them more involved. We're trying to reach out and gather ideas ourselves, both for the cross-cloud project and for the CNF/VNF work: what's happening in different groups, are people doing stuff? Getting other folks on here would be great. Right now, for how to connect, here's the mailing list.
A
I will make a point to share this contact information with them, just to see if they want to have someone pop in and understand the nature of what you are doing, because it's conceptually very similar to what they're doing: building a complex system across a lot of architectures and platforms. I'm sure there are either insights from one to the other, or perhaps shared experience that might be useful.