From YouTube: CNCF CI WG Meeting - 2019-02-26
A: And also in April will be the Open Networking Summit, where Watson's CFP was accepted, and he'll be presenting on the CNF Testbed. There's a link to that presentation there; please check it out if you're able to attend ONS. KubeCon + CloudNativeCon Barcelona is in the middle of May, and we hope to have an intro and deep dive for the CNCF CI dashboard.

A: So we're excited to announce that the CNCF CI status dashboard v2.0 has been released. You can check out the release notes at the link below, and I have prepared some slides, about 10 minutes' worth if not sooner, to show where we've been, where we're at, and where we're going with the CNCF CI status dashboard.
A: So why do we have a CNCF CI dashboard? Because the CNCF ecosystem keeps growing: right now there are four graduated projects, sixteen incubating projects, and twelve sandbox projects, and CNCF staff would like to ensure that those projects are building, provisioning, and deploying as expected. The CNCF CI dashboard visualizes that insight.
A: The CNCF CI dashboard consists of a CI system, a status repo server, and the user-facing dashboard. The CI system currently has three stages, for build, provision, and deploy, and we test the projects' stable and head releases on a bare-metal environment. The testing software can reuse artifacts from the project's own CI system or generate new build artifacts, and then the repository server collects the results and displays them on the dashboard.
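As a rough sketch of what the status repo server aggregates per project and per release across those three stages (the field names here are hypothetical, not the dashboard's actual schema):

```python
# Hypothetical sketch of a per-project, per-release status record; the
# stage names follow the talk (build, provision, deploy) but the schema
# is illustrative only.

STAGES = ("build", "provision", "deploy")

def summarize(results: dict) -> str:
    """Collapse per-stage results into a single dashboard status."""
    for stage in STAGES:
        if results.get(stage) != "success":
            return f"failing ({stage})"
    return "passing"

# One record per (project, release) pair, e.g. stable vs. head.
record = {
    "project": "kubernetes",
    "release": "stable",
    "build": "success",
    "provision": "success",
    "deploy": "success",
}

print(summarize(record))  # passing
```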
A: So here's a view of the CNCF projects that we are targeting to add to the CNCF CI dashboard.
A: We'll take a look at where we're at today. We're focusing on the projects we have, the graduated projects displayed above the incubating projects, and then ONAP. We're showing the build status and the releases for stable and head, and we're deploying and testing on a bare-metal Packet environment.
A: The goals are: at a glance, highlight and validate the CNCF graduated and incubating projects; increase collaboration with the CNCF project maintainers; accelerate adding new projects to CNCF CI; and demonstrate provisioning on bare-metal Packet. The next release will show the Kubernetes stable release, and then we'll add functionality to show the head release as well, so that we can support more release versions of Kubernetes, like a release candidate or the last stable release; it can scale indefinitely. I'll show you the mock-up in a minute.
A: So what's next: this is a mock-up of the UI changes that are in progress in our dev environment and will be released to cncf.ci as soon as possible. They include adding the test environment section for Kubernetes stable on Packet at the top, and adding a test column to show end-to-end test results that are provided by the project maintainers, and we'll also switch the order of the build and release columns.
A: After the deploy phase, we'll also continue planning on integrations and how to use kubeadm. Next month we will add support for those external integrations for the build, deploy, and end-to-end tests, update the ONAP stable and head releases, and add kubeadm to the provisioning stage. In April we'll publish documentation on how external CNCF project maintainers can add and maintain their projects on the dashboard, add those smoke tests to the deploy stage, and collaborate with maintainers on them.
A: That's it at a high level. Next: how to add a new project, and testing on the Kubernetes release candidate. Three tickets are in progress or already in testing, so these three will be closed by the end of the week: implement the test environment section, add the test column, and move the build and release columns. Then we'll move on to v2.1.
A
V21
to
implement
a
really
selector
drop-down
to
toggle
between
stable
and
head
kubernetes
will
add,
sub
headers
and
alphabetized
sorting
so
that
you
can
see
at
a
glance
which
projects
are
graduated,
which
ones
are
incubating
and
which
ones
are
Linux
Foundation
and
then
we'll
start
changing
the
backend
and
how
the
existing
project
details,
release
details,
build
details,
deploy,
phase
details
are
all
added
so
that
we
can
support
that
external
integration
with
contributors.
So.
A: There are several ways you can provide feedback on the CNCF CI dashboard. You can join the monthly CNCF CI working group calls, currently scheduled for the fourth Tuesday at 11 a.m. Pacific time. Please subscribe to the mailing list; if you have any questions, you're welcome to send us an email. If you're not already on the CNCF Slack, please join it and join the CNCF CI channel. And we always welcome issues on the GitHub tracker board under crosscloudci.
B: [inaudible]

A: That's a good question. In this last sprint (it's in testing now) we've broken it out into its various parts so that it's composable, and we're currently testing the project column: how to add a logo, the name, the display name, and the URL. That would just take cloning the repo, creating a folder, adding that information, and creating a pull request, so that part itself would be a matter of 15 minutes.
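The steps just described might look roughly like this. The repo name, file layout, and field names are hypothetical (check the actual crosscloudci repos for the real layout), and a local `git init` stands in here for cloning the upstream config repo:

```shell
# Hypothetical sketch only: the real config repo, layout, and field names
# may differ. In practice you would clone the upstream config repo; a
# local init keeps this sketch self-contained.
mkdir -p ci-dashboard-config/projects/myproject
cd ci-dashboard-config
git init -q

# Add the project's display metadata: name, display name, logo, URL.
cat > projects/myproject/config.yml <<'EOF'
name: myproject
display_name: My Project
logo: myproject.svg
project_url: https://github.com/example/myproject
EOF

git add projects/myproject
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add My Project to the CI dashboard"
# ...then push the branch to your fork and open a pull request upstream.
```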
C: I think you're right, that would be a very small amount of time; it's the other parts that we're working out. But as far as maintenance, updating the project details should be pretty minimal. Right now it is in a separate repo from the projects: each of the projects has its own repository under this org, and that's the way we're doing the configuration.
C
We're
trying
to
make
it
to
where
a
CNCs
CI
configuration
file
can
be
moved
into
the
project
eventually,
and
then
all
that
could
be
maintained,
so
any
type
of
name
changes
or
logo
changes
or
other
things
that
get
updated
can
be
there,
and
that
would
be
similar
with
all
the
other
pieces
on
the
screen
that
lucina's
showing
so
we're
working
through
each
one
of
those
right
now
to
just
put
a
ballpark
out
there.
If
all
of
the
prereqs
are
met,
which
is
primarily
are
your
artifacts
publicly
available?
C
Is
your
status
information
if
you're
using
something
like
circle
CI?
Can
we
pull
that
because
that's
where
we're
going
towards
us
pulling
in
information
from
multiple
places
and
trying
to
show
how
they
work
together?
So
if
those
are
publicly
available
and
you've
met
those
prereqs,
then
adding
the
necessary
information
to
the
configuration
file
should
be
less
effort.
Where
we're
going
to
have
more
effort
would
be
say
if
Cordina
says
we're
using
Travis
CI
here,
and
we
need
this
sort
of
setup
then,
and
we
haven't
integrated
with
that.
Then
we
needed
to
walk
through
that
process.
C: That would take some time, but then the next project will be able to utilize that same integration. For ONAP we actually have an integration with their Jenkins server, using the Jenkins API, so ideally we can reuse that with anyone else. Beyond that, you start getting into things like testing: for the deploy stage, we actually want to do smoke tests.
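The Jenkins integration mentioned above goes through Jenkins' standard JSON API; a minimal status poll might look like this (the job URL is hypothetical):

```python
# Sketch of pulling build status from a Jenkins server, as the ONAP
# integration does. The /lastBuild/api/json endpoint is standard Jenkins;
# the job URL below is hypothetical.
import json
from urllib.request import urlopen

def parse_status(build: dict) -> str:
    """Jenkins reports result=null while a build is still running."""
    return build.get("result") or "RUNNING"

def last_build_status(job_url: str) -> str:
    with urlopen(f"{job_url}/lastBuild/api/json") as resp:
        return parse_status(json.load(resp))

# Example call (hypothetical job URL):
# last_build_status("https://jenkins.example.org/job/onap-master-verify")
```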
C
And
then
you
start
looking
at
the
next
stage,
which
would
be
integration
testing,
so
we're
hoping
and
what
Luciano
is
showing
that
we
can
do
these
piece
by
piece
and
let
projects
maintain
more
and
more
and
add
themselves
and
start
updating
more
of
the
parts.
So
they
can
get
on
sooner
and
then,
as
they
add,
like
full
integration
testing,
then
that
badge
would
go
live
from
na
to
green
red.
C
B: [inaudible]

C: About the project itself, absolutely; a lot of the upfront effort is CI integration, and then, as we cover more of them, we will focus on helping with the testing, for projects like CoreDNS and Prometheus. In particular, we've had a lot of feedback on how we can collaborate to build those tests, including things like templates or best practices, areas where you could say: we've dropped them in here, and here's the order they're run in and what's expected.
A: [inaudible]

C: Okay, so I've been asked to talk about the CNF Testbed, which is a new project from CNCF. This is more of a complementary effort to all the projects, similar to CNCF CI; it's not a project going through the incubating process or anything yet, but it's a project that's helping different projects. Let me see, the slide deck is moving around a little bit.
C
So
VM
based
network
functions,
cloud
native
versions
and
it
could
encompass,
and
it
probably
will
because
we've
been
talking
about
use
cases
that
are
not
just
performance
based
but
other
functionality,
and
that
will
highlight
items
like
orchestration
and
and
failures
and
other
things
so
we'll
end
up
with
tests
and
CI
testing
deploying
this,
where
we
would
take
down
different
components
and
see
how
those
work
and
other
things,
but
right
now
it's
been
around
performance
and
and
trying
to
get
identical
code.
So
the
same
base
network
function
code.
C
How
does
it
work
in
an
environment
like
OpenStack
or
any
KVM
environment
versus
kubernetes
and
and
then
trying
to
say
we're
using
the
same
hardware?
It's
public?
This
is
it's
all
being
done
on
unpack
it
as
the
primary
and
do
have
a
complementary
work
being
done
with
the
Linux
Foundation
CC
project,
which
is
part
of
FDI,
oh
and
they
do
testing
in
their
lab
and
they're
actually
doing
some
of
the
same,
taking
the
same
test
that
we're
doing
and
running
it.
C
There
we're
also
running
some
of
their
tests
to
kind
of
compare
that
the
test
fit
itself,
though
the
idea
is,
you
can
deploy
the
entire
thing
from
nothing,
but
an
API
key
package,
API
key
and
able
to
bring
everything
up
in
your
own
account
and
and
have
the
machines
running
the
clusters,
whether
that's
open
side
kubernetes,
whatever
you're
wanting
to
test
and
then
deploy
the
the
network,
functions
the
applications
in
a
configuration
and
then
run
tests
against
those.
So
it
looks
pretty
similar
between
the
environments.
C
You
have
your
regular
clusters
and
4s
on
the
performance
test.
We're
talking
about
is,
there's
a
traffic
generator,
that's
sending
packets
and
in
our
case,
as
fast
as
possible,
so
that
we're
trying
to
stress
test
the
clusters,
all
the
different
components
that
are
comprised
on
that
and
and
see
what
the
performance
is
and
how
everything
reacts.
So
that's
kind
of
a
high
level
and
there's
a
this-
is
we're
looking
at
in
this.
It's
showing
there's
layer,
two
connections,
so
that
packet
we
actually
connect
one
of
the
ports
to
be
layer
to
traffic
so
vs.
C: On the Kubernetes side (I didn't drop a slide in here), you still have your regular flat layer 3 network, but on the pods, and on the containers themselves, you also have additional ports that are connected to layer 2, and then you can run additional types of traffic on those. In our case, we're handing the interface over outside of the regular path that Kubernetes or OpenStack would use to control that traffic, and running it in a higher-performance setup.
C: This is showing some of the software running, just to see all the different pieces that are up. On the bottom here we have the Packet pieces; there's some type of physical router, and we actually configure that when we bring up the testbed, so you don't have to do anything ahead of time. You're not required to go in and pre-configure the Packet environment once you have an API key.
C
We've
been
working
pretty
closely
with
packet
on
access
to
the
different
things
that
are
coming
out
and
trying
to
work
to
support
more
use
cases
and
telcos,
absolutely
great,
and
thanks
to
them
on
that
and
the
rest
of
this
kind
of
goes
over
the
software.
It's
all
hunterson
open-source.
So
one
of
the
items
on
this
project
is
it's
trying
to
recreate
use
cases
that
are
out
there
that
use
some
open
source,
some
proprietary
or
different
bits
or
configuration
that
you
may
not
have
visibility
to.
C: So all of this is reproducible. On the traffic generator side there's OPNFV (Linux Foundation) NFVbench, TRex, and DPDK; all those pieces are out there, and we're looking at other projects that are using them and trying to reuse some of what they have. The difference on the cluster would be this vSwitch, which is what provides the additional interfaces. Normally Kubernetes is going to talk through the kernel networking; we add a vSwitch running the VPP software that connects to the interfaces.
C: It also connects to the containers using a memory interface called memif, which allows high-speed communication. And then over on the (I don't know why that says OpenStack; it should say vhost-user, I'll fix that) on the OpenStack side, it's also using the same VPP vSwitch, and there's an OpenStack project called networking-vpp, so it talks to OpenStack through Neutron, and everything looks pretty similar after that.
C: I'll move on, and if you all have questions we can come back to those at the end. So for this one in particular, what are we doing right now? We've done several different use cases and tests over the past 10 months or so, including some tests for KubeCon, and here's one of them; this is the performance one.
C: You can take these network functions, deploy them to Kubernetes or OpenStack, and chain them together in some fashion. You can go through the vSwitch, so this is the traffic going through, and it's similar on Kubernetes and OpenStack. The big difference on Kubernetes is that it uses the memory interface, memif, versus vhost-user; and on Kubernetes you can also directly connect those containers together, which makes a big difference in performance.
C: There are some other types of scenarios, but those are the two big ones that we're looking at and testing for now. You can also run multiple chains: maybe you're separating them because they're running different types of services, or you're splitting things up between different networks; there are use cases where you want different chains. So we are doing testing where you have multiple chains, where the density changes on a node, and the amount of resources, CPU and memory, can be affected.
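To make the density point concrete, here is a toy calculation (the core counts and reservation are hypothetical, not measured testbed values) of how per-NF resources shrink as chains are added to a node:

```python
# Toy illustration of chain density vs. per-NF resources on one node.
# The numbers are hypothetical, not measured testbed values.
def cores_per_nf(node_cores: int, chains: int, nfs_per_chain: int,
                 reserved: int = 4) -> float:
    """Split a node's cores (minus a host/vSwitch reservation) across all NFs."""
    usable = node_cores - reserved
    return usable / (chains * nfs_per_chain)

print(cores_per_nf(28, 1, 2))  # 12.0 cores per NF with one chain
print(cores_per_nf(28, 3, 2))  # 4.0 cores per NF with three chains
```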
C: So then, how does that affect performance? We've done a lot of different types of scenarios. This one is showing three chains with two network functions per chain, for each of the configuration types, and then we pull the results. This is a summary of some of the results: on the OpenStack side, for the three chains with two network functions, on the VM side (this was KVM, I think) it was 1.1 million packets per second, so you're looking at a large number.
C
So
this
is
a
that
when
you're
looking
at
requests
per
second
on,
say
a
web
server.
This
is
on
your
high-speed
network
equipment
and
then
you're
moving
that
type
of
service
into
containers
or
vans
in
the
six
million
on
queue
brunette
to
brunette
es.
Actually,
so
this
is
kubernetes.
These
are
for
the
snake
case,
where
it's
going
in
and
out
and
then,
when
you
directly
connect,
we
were
seeing
nearly
a
nearly
nine
million
packets
per
second,
so
that's
pretty
cool.
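Putting the throughput numbers quoted above side by side (1.1 Mpps on OpenStack/KVM, roughly 6 Mpps on Kubernetes through the vSwitch chain, and roughly 9 Mpps direct-connect):

```python
# Approximate packets-per-second figures as quoted in the talk.
rates = {
    "openstack_kvm": 1.1e6,
    "kubernetes_memif_chain": 6.0e6,
    "kubernetes_direct_connect": 9.0e6,
}

baseline = rates["openstack_kvm"]
for name, pps in rates.items():
    # Report each setup relative to the OpenStack/KVM baseline.
    print(f"{name}: {pps / 1e6:.1f} Mpps ({pps / baseline:.1f}x baseline)")
```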
C: These are some of the stats about deploy time: on OpenStack it's over an hour to get the infrastructure up, versus less than 16 minutes, including a reboot of the Packet server, for Kubernetes. So that's pretty cool, and we're working to get that down further, to not have to reboot servers even with changes like kernel parameters, via the Packet API. And then there's the deploy time for the actual network functions, bringing up this snake case or whatever.
C: How long does that take? If you're looking at CI and iterating, and you want to go through these, you want to get all of these numbers down as much as possible, and the resources down, so that you can run more workloads, while ideally still maintaining the same performance or better as we continue. Let's see. This has been going since around May; KubeCon Copenhagen was kind of when this project kicked off, and the primary challenges have really been around OpenStack.
C: We have been trying to get a high-performance OpenStack with VPP, 100% open source, that's redeployable and works as expected whenever you bring it up, and that has been very difficult. We actually just got it fully working on on-demand Packet instances in the last couple of weeks, and those are with Mellanox network cards, which are not ideal.
C: They use proprietary drivers, there's a lot of other weirdness in the way the drivers work, and there's some funkiness in OpenStack Neutron and other pieces that we've messed with, especially around making that work with VPP. So there were a lot of different things to get it to a point where someone can just take it and deploy a new cluster of their own on Packet. Kubernetes with Mellanox is, again, difficult; everywhere you're deciding how you want to do layer 2.
C: So that's probably it. Again, all you need is an API key: if people are interested in this, you can recreate it. We're interested in people saying "you're not doing Kubernetes right, you should be doing this," in pull requests, in opened tickets, whatever. We are looking at other environments so that we can compare; Packet is the primary area for us to focus on, but we'd like to be able to compare it in other areas, like AWS bare metal; they've also released i3.metal.
D: Yeah, hi, this is Chris Hoge from the OpenStack Foundation. I actually have a few questions. You'll have to forgive me, because we really didn't start digging into this until a couple of days ago, or really yesterday, so if I'm uninformed on some of these items, please be sure to correct me. But one of the things that concerns us a little bit is that it actually appears you're running the Kubernetes and the OpenStack on different hardware.
D: [inaudible]

C: Okay. So I would love a pointer; open an issue or send me a Slack message on the CNCF Slack, wherever you'd like, if you see something that's different. There may be something with potentially two controllers, and master nodes that could be on different instance types. But the performance metrics that you're seeing here are on the same hardware: when I refer to the Mellanox, that's the m2.xlarge for both OpenStack and Kubernetes.
C: The Mellanox is the ConnectX-4, and that's the exact same hardware. When we're talking about the data plane testing, we're not doing any testing of how the master and control nodes handle management traffic. The layer 2 I showed: we happen to have the masters connected, so they can talk there, but there's no traffic on it other than the management communication. The traffic generator is only hitting the worker nodes, and those worker nodes are all m2.xlarge.
C: What we don't have on OpenStack yet, which is why we're not showing any comparison yet, is OpenStack running on Intel NIC-based instances. Packet will be releasing a new instance type (Ed, if you're listening, correct me if I'm wrong, but I believe it's called the n2.xlarge) that'll be coming out with quad-port Intel NICs, and we've been testing some reserved types.
B: [inaudible]

C: Thanks. I think those are coming out, from what I've heard, sometime this quarter, in the next six weeks or so; I don't see anything public mentioned, but I think that's about right. At that point everyone will be able to deploy on-demand Intel versions. But if you want to do tests right now, the comparisons that you get between OpenStack and Kubernetes would be on m2.xlarge with the Mellanox NICs; those are built for it. Okay, yeah.
D: The other thing that kind of jumped out at us is when we were comparing deployment times: 65 minutes, to me, feels like you're doing something wrong. I know that for some similar installations that I do in my home lab, which probably doesn't even have the same performance characteristics as what you're running there, that time should be much closer to what you're talking about with the Kubernetes time. But it's not very clear to us what you're measuring.
D: It's not clear if you capture the same amount of time when you're doing the Kubernetes deployments, and so it's not clear to us exactly where that time is being burned up. We need to dig in and look closer at what you're doing, but I think that there are probably better deployment methodologies you can use that are much faster.
E: Either result looks very high, but if we were to test, say, just the OpenStack deployment time, it would probably be more like 20 to 30 minutes, and Kubernetes would be a lot lower as well. We are measuring both from the same starting point, Kubernetes the same as OpenStack, but where most of that time is built up is that we have to do reboots on the Packet nodes.
C: Part of it may be when we're able to actually deploy the different components; there are some limitations on that, like the networking-vpp setup and the vSwitch on OpenStack, where we set those up at a different time than we do on Kubernetes, just because you can't do it earlier. For OpenStack, there's some infrastructure setup that needs to be created ahead of time, so that you have all the information back from the system to use as input for deploying OpenStack.
C: We had some limitations with the OpenStack deployment method. Part of this right now is that we're using Chef OpenStack to do the deploy, and we're already aware that there are other deployment methods, some that use containers for deploying the services, that can be very fast; that wasn't really an option. It would probably be a good idea to say: here are different ways to deploy, and maybe we even say here's a Chef-deployed OpenStack, here's another, but...
D: Because you don't know what's being built, or whether you're downloading packages; downloading and installing a container image is just much faster than installing a whole wealth of packages across the system. You're essentially taking out the build time if you're considering things like that. So it's hard to tell if it's actually an apples-to-apples comparison.
C: Absolutely. So it sounds like the first step would be more visibility into the stages of what's happening. I didn't drop it in here, but I have another slide, and we've got to update the readme, that actually goes through the stages and talks more about: here's where this kicks off, Terraform runs here, and eventually we're using Ansible to provision some of the things, and here's where those pieces run. We'll be pushing that to the docs, so you can see what we do now.

D: [inaudible]
C: I understand, and I'm happy to hear the feedback. If you'll ping me on Slack, I can invite you to the CNF testing/dev channel where we're focusing on this, and we can get you going there, and also get you going on the GitHub. We would love to have improvements on that; we definitely want it to be a fair comparison, and to talk about the options people have out there. Cool, thanks, appreciate it.
C: Okay, let's see. If anyone else would like to join in on this, there's the twice-monthly meeting that Lucina mentioned earlier, the CNF Testbed meeting, starting on March 4th, on the 1st and 3rd Monday at 8 a.m. Pacific time. We'll be talking about things like the use cases we were just discussing that could be implemented, and whatever else we want there; feel free to open issues. And then there's the CNCF Slack channel; I don't know if it's been renamed to cnf-testbed or not, but it probably will be at some point. Thanks.
F: If you've seen kind, Kubernetes in Docker, it's a really interesting project done with a single binary: it can build Kubernetes from source and then deploy it, and it spins the containers up. They're wanting to have some of their CI infrastructure run on Arm, so thanks to Ed and crew at Packet, we now have a few Arm boxes available. And getting people to the state where they're successfully integrating the various features of CI into their projects is something that I would say...
F
Our
goodness
group
is,
is
passionate
about
and
since
we've
kind
of
a
developed,
some
expertise
and
I'd
like
to
offer
in
pairing
with
folks,
so
that,
where
we're
not
necessarily
writing
all
the
info
for
them,
but
we're
pairing
with
them
and
documenting
what
we
do
together
so
that
others
can
gain
more
momentum
in
getting
their
CI
for
their
various
projects.
Integrated
and
would
love
to
actually
spend
some
time
getting
to
know
the
onboarding
help
with
the
onboarding
through
CN
CF
FCI
as
well.
E: [inaudible]

F: Yes, this is something that we've just had people, various teams, asking for help with. Also, watching all of the community discussions, they're asking: how do we get started using the infrastructure from the CNCF for our CI, and how do we use Packet with our CI? So we're seeing this need in the community and meeting it, I think, as a CI working group.
C: I know that I've heard of sending requests to an address at cncf.io; there's a helpdesk mailing list, maybe, that I've seen as an initial start for people requesting to work on things. You could look at that. Otherwise, if there's some mailing list or somewhere for people to reach out; I don't know if you want issues opened on, say, APISnoop for that, but somewhere where people could get started.
F
And
this
is
kind
of
separate
from
API
snippets,
definitely
I
think
falls
within
the
CMC
FCI
working
group,
okay
and
and
I
see
that
it
helped.
Us
is
one
of
those
places
where
we
can
see
the
requests,
but
we
really
don't
have
a
response
from
somewhere
working
group.
I,
don't
think
we're.
Focusing
I
would
like
to
see
it
more
intentional
effort
and
I'm
trying
to
kind
of
help
in
that
regard,
to
provide
some
of
the
pairing
and
some
of
the
mentoring
and
creating
documentation.
C: Yeah, I don't know, I'm just dropping this slide in. It sounds like the mailing list in general sounds good, and then the helpdesk I was thinking might be a helpful thing, because Chris Aniszczyk, Dan, and other folks are already telling projects to ask if they're interested or have a need for help, and there's not a specific place for that, so requests end up going various places. Anyway, those are my thoughts, but I don't know if anyone else has input.
F: The rebranding, I wasn't really a part of that portion; it's news to me as of today. My initial intention in registering cncf.ci was to provide this type of thing for all of the CNCF projects, and so I'm trying to find a way where that fits within the CI working group. Yeah, I'm just trying to navigate that and saying: this is something that the CNCF and our working groups should be providing.
F: Is that a sub-thing of the CNCF CI? Because as I've been working on things, I've set them up as subprojects under cncf.ci, as part of the CNCF CI focus on the community; but what we're trying to expose with the pairing is: how do I help, how do we get people working together and creating documentation of this, and also, where does it sit within the community?
C: So I would say, these conversations, for everyone here, have been kind of all over the place, across different groups and everything else. The CI working group itself, I would say, think of that as a separate thing from the dashboard that Lucina was showing earlier; it happens to have that domain, and I know the naming makes it look as if it's part of the working group.
C
The
dashboard
has
been
specifically
it's
not
just
rebranding
of
that,
so
that
it's
not
rebranding
the
working
group,
its
remanding,
that
dashboard
and
the
focus
of
what
that's
trying
to
show,
which
is
definitely
different
from
what
you're
talking
about
what
you're
talking
about
they're
like
pairing
and
working,
would
be
an
additional
project
under
the
working
group.
So
from
the
idea
of
the
CI
working
group,
that
sounds
great
I,
don't
know
where
the
conversation
should
be
on
that
the
first
place
I
could
just
think
is
right.
C
Now
this
public
mailing
list
was
tied
into
the
work
group,
the
maybe
the
me
the
github
for
that.
As
far
as
where
the
working
group
is
going
I'm
not
sure
because
the
TOC
has
just
kind
of
redone
things
if
you'll
go,
look
at
some
of
the
recent
there's
been
the
TOC
meetings
that
have
been
out.
The
there's
been
a
lot
of
mailing
list
stuff,
there's
several
documents
that
the
TOC
is
hat
on,
what
our
working
groups
going
to
be.
What
are
those
going
in
the
CI
working
group
has
been
labeled.
C: Here's that service desk link; there's actually a GitHub for that, plus emails and other things, for anybody looking at the CNCF CI responsibilities: talking there and trying to see what's available. I think offering those extra services is a good thing for the community, and reaching out and trying to work with them would probably be a good place to start.