From YouTube: CNCF CI WG Meeting - 2018-11-27
A: So I'm going to give a quick update on the Cross-Cloud CI project, and then we can jump into some of the other ones. I think we have a few topics here: we're going to go over the upcoming events related to CI within Kubernetes and CNCF, like the FD.io mini summit. There's the CNCF CNF project, where a lot of demos are happening, and there's a Kubernetes / Network Service Mesh talk after the mini summits. So there's a lot going on there. At KubeCon there are going to be presentations on both of those topics. There's also an intro talk on Cross-Cloud CI, and then there's a deep-dive talk where there's overlap between the Cross-Cloud CI project and the CNF project, where there's been some CI testing work on that CNF project. And then Andrew from VMware is giving a talk on adding support to cross-cloud and how he's been using the different components in that project.
What we're looking at right now is what the next iteration is that we want for the whole project, and specifically for the dashboard, if I can bring that up. In the dashboard itself there's a split between testing Kubernetes across the cloud providers and the different feature sets, and then testing projects on those clusters, and those are kind of two different purposes and audiences. So one thing that we're looking at, post KubeCon Seattle, is doing planning for the next iteration and updating documentation for projects to contribute themselves, and deciding what we want to do with the different pieces, since we've been doing a lot of collaboration with different projects and on the VMware side. So I would love to get feedback from the community on what they would like to see.
A
There's
some
ideas
that
we've
been
having
with
different
projects
like
the
cordilla
nests
and
Prometheus
have
these
cases
that
they
would
like
to
have
in
their
testing.
So
that's
something
that
we've
been
interested
in
Watsons
talked
about
reference
reference
test
cases
that
other
people
can
look
at
or
that
projects
are
working
together
and
we'd
like
to
get
feedback
for
the
next
version
of
the
dashboard.
And,
what's
going
to
be
shown,
what
would
be
useful,
as
well
as
the
components
underneath
like
cross
cloud
and
and
cross
project
and
the
different
pieces
that
do
different
testing.
Okay, so I'll take that as no comments for now. These are some of the presentations and things we referred to that we're attending, including the ongoing meetings. I'm going to skip over the overview, but feel free to look through these to see how it currently works. And a related topic, which is kind of related to some of that feedback: as part of the effort on this CNCF project for CNFs, we are trying to make it easier to do testing with network functions on Kubernetes and containers, as well as, I would say, how you can recreate the tests and environments on hardware, so bare-metal type stuff.
A lot of the focus has been on Packet. We've also started working towards supporting other environments, so FD.io, which is a project at the Linux Foundation, has a lab and we've had access to that. So we've been trying to do testing there and be able to reuse the same code, with modifications, to work on both platforms, and then taking those.
So you can continue using it and all of that, so the different stages are usable. We're trying to add more documentation for folks to start seeing what's going on, outside of what we're doing, and ideally be able to use the different components individually as well as as a whole. So say you want to bring up an entire Kubernetes cluster that has layer-two support in it and deploys network functions in certain scenarios.
A
Well,
we
have
some
reference
code
that
we're
working
on
that
we're
going
to
be
demoing
at
the
FDA,
a
mini
summit
as
well
as
Q
con,
and
for
doing
that
and
then
the
different
pieces
are
broken
down.
So
if
you
wanted
to
dig
into
individual
parts
itself
and
everything's
open
source
and
the
test
and
harness
that
we're
using
is
nfe
bench
and
t-rex
for
generating
packets,
so
that's
probably
at
their
diesel
some
upcoming
presentations
at
coop
con
and
around
there.
B: So there's this great article by 451 Research that was done over 2017, and it talks about how the public cloud is not necessarily forever, and there's this idea that's been floating around since last year, or a little bit before, of what we call cloud repatriation, which basically suggests that people are moving away from public cloud.
B
There's
been
a
lot
of
talk
about
folks,
moving
away
from
public
cloud
for
no
more
different
reasons,
but
these
folks,
you
know,
went
ahead
and
dug
into
it,
and
you
can
see
on
the
right-hand
side
of
this
chart.
These
were
the
reasons
for
people
shifting
their
workload
from
public
cloud,
but
it
wasn't
a
it's
not
a
the
idea
of
repatriation
right
is
like
it
wasn't
working
his
total
failure
on
the
other
of
the
other.
You
know,
entities
part
it's
a
permanent,
you
know
shift
and
that's
not
the
case
in
IT
infrastructure.
So the public discussion doesn't really reflect what actually happens on the ground in IT departments. They move away from public cloud not because the cloud is necessarily not working; there are other reasons, like performance and availability issues, or an improved on-premises cloud, so maybe they didn't have a private cloud before and they stood one up, or any of the other things on the right-hand side there. And then they were asked: okay, if you're moving...
So the question is whether you turn to hybrid cloud, and the idea is that people have a number of different reasons for using one, the other, or both. Looking through that lens, the open-source cloud ecosystem is very large: there are many public cloud service providers across the whole world, basically hundreds of open-source projects, there are all types of clouds, open-source and proprietary software working together, and then there are users up and down and across the spectrum.
B
They're,
normally
using
it
with
a
number
of
other
tools
together,
and
so
some
of
the
motivations
of
open
lab
is
or
people
participating.
That
matters
like
you
have
explicit.
Customer
requests
like
they
need
support
for
product
X
from
vendor
Y,
and
you
don't
want
to
be
the
only
person
dealing
with
the
pain
and
anguish
per
se
of
trying
to
deliver
that,
because
sometimes
those
customer
requests
are
across
different
companies
or
across
different
businesses
within
the
same
company,
and
so
you
won't
be
able
to
have
some
idea
of
what
is
and
is
not
working.
B
There's
also
technical
requirements
select
the
need
for
feature
function.
You
don't
want
necessarily
be
the
company
carrying
that
patch.
You
know
for
a
particular
project
for
infinity
right
just
for
protect
just
because
you
have
a
particular
customer
someone
else
more
than
likely
has
that
same
customer
or
that
same
feature
of
function,
requests,
and
so
it's
best
to
given
a
spirit
of
open
source
to
work
together
on
that
and
number
of
other
things.
B
So
what
we
did
was
we
focus
primarily
on
OpenStack
when
we
initially,
you
know
again
got
started
last
year,
and
these
are
some
of
the
motivations
we
way
were
to
pull
out
in
terms
of
paying
points
across.
You
know
five
different.
You
know
aspects
of
things
that
I'd
showed
earlier
in
terms
of
the
open
source
ecosystem.
That's where, in the larger open-source ecosystem as well, OpenLab really sits and finds its value. So these are the five, and of course the OpenStack Foundation, and then these initial partners were the ones who said: okay, we like the idea, we think it's valid, you made a case, let's work on it. But of course it's not just about those companies; we also had people from different communities on board. So what we focused on primarily, initially, was Gophercloud, Terraform, Kubernetes and OpenStack.
So again, the two biggest open-source projects that we could think of that were really in need of support were Kubernetes and OpenStack working together, not necessarily the projects individually, because again we focus on integration. OpenLab essentially has a governance model that is very loosely structured; it will evolve over time, of course. And here's just an idea of where we were able to marry together software components that are open source with public, private and hybrid clouds, which could be proprietary.
B
You
know
in
the
aspect
of
things
and
then
also
we
brought
in
academia
for
lab
and
project
support
and
what
we
were
able
to
have
initiative
front
was
so
we
have
a
CI
environment,
and
this
is
our
initial
capacity.
As
of
today
we
started
off
with
two
and
in
that
year
we
grew
to
add
four
more
of
providers
for
the
CI
system,
which
basically,
they
provide
virtual
machines
across
six
clouds
and
then
also
we
have
dedicated
infrastructure
as
we
made
available
to
us.
Initially we did not have any dedicated infrastructure, but now we have six providers giving us dedicated infrastructure. As you see, there are a lot of dedicated servers here, quite a few IoT devices, and then there are the WSNs; that stands for, basically, a network relay, wide-spectrum networking I believe is what it is. Again, just to give an idea of what we've been able to accomplish, we've added more projects onto it; I said we only had Gophercloud, Terraform, Kubernetes and OpenStack.
B
You
know
like
standing
up,
OpenStack
environment
testing,
these
things
out
so
showing
folks,
you
know,
there's
some
ways
in
which
we
do
high
availability
or
here's
like
reference
architects
for
that
help.
Them
focus
on
zero,
downtime
and
skip
level
upgrades
and
things
of
that
nature,
so
making
making
resources
available
for
people
to
not
only
see
these
things
working,
but
also
try
them
out
on
their
own,
so
that
they're
not
destroying
production
environments
and
also
helping
folks
to
shift
their
culture
to
a
more
DevOps
centric
culture.
B
But
if
they
can
test
stuff
out
in
the
lab
first,
then
it
helps
to
allow
them
a
sink.
You
know,
for
example,
if
they
need
a
test
environment
to
work
with
they're
open
to
work
with
their
production
environment
where
they
do
like
canary
canary
deployments,
are
rolling.
You
know
rolling
out,
you
know,
features
of
that.
You
know
new
features
and
stuff
like
that.
They
were
able
to
kind
of
test
things
out
in
a
lab
before
doing
it
in
production
and
again,
there's
just
ways
to
get
involved.
B
You
know,
essentially
you
could
you
could
get
involved
in
issues
just
by
sharing.
You
know
if
things
are
too
like
that
you
think
that
should
be
integrated
tested
together,
you
can
leverage,
you
know
the
SDKs,
the
test
that
we
doing
all
the
stuff
that
we're
doing.
You
can
also
leverage
something.
So
you
guys
share
so
give
input,
but
you
can
also
take
what's
already
available
and
try
things
out
and
you
know
give
a
create
a
feedback
loop
for
us.
B
You
can
still
contribute
infrastructure,
but
we
have
quite
a
bit
of
as
you
already
seen,
and
so
primarily
you
know
we're
looking
for
more
people
to
contribute
these
tests.
You
know
test
cases,
integration
requests
and
also
participate
in
helping
to
fulfill
some
of
those.
The
work
that's
related
to
to
making
some
of
those
things
happen.
B
They
can
make
it
better.
You
know-
or
you
can
make
it
better
for
yourself
and
you
don't
have
to
continue
to
do
it
so
I'm,
just
gonna,
let
the
video
play
like
say
just
an
air
right
here,
so
I'm
logging
in
right
now
you
can
see
that
there's
two
experiments
that
I
have
running
I'm,
going
to
show
you
profiles
which
we've
saved
and
so
instantiate
them
are
just
simply
just
clicking
the
play
button
there,
but
I'm
showing
you
a
topology
of
one
of
the
experiments.
So when you create experiments, in this particular case OpenStack is the default, but you click the change-profile button to choose another, and so, for example, we have one for Kubernetes. We have some that are just two nodes connected together. This one has a bunch of nodes connected with multiple layer-two links, across multiple sites. A profile could be just a few small nodes; it doesn't have to be as large as the other one.
B
So
you
just
use
the
Create
topology
button
here
you
can
drag-and-drop
machines.
So
these
are
those
dedicated
servers.
You
know
thousands
of
dedicated
servers
or
programmable
network
switches.
Okay,
multiple
sites,
you
can
just
drag
them
from
one
side
to
the
other,
so
you
want
to
test
things.
You
know,
I'll
have
this
site,
and
you
know
the
u.s.
I
have
another
site
in
Canada
or
someplace
else.
You
can
emulate
that
or
you
could
actually
spin
up
nodes
that
are
in
those
two
different
sites,
so
it
doesn't
necessarily
have
to
be.
B
Emulating
could
actually
be
what
is
happening
in
the
physical
world,
so
I'm
just
showing
again
how
easy
it
is
to
move
these
virtual
machines,
as
well
as
dedicated
servers
together
how
to
link
them
together
by
simply
dragging
the
line
there.
You
can
connect
the
devices
themselves
or
connect
them
directly
to
the
link
by
dragging
over
to
the
link
there.
So
that's
a
link
between
those
devices
and
then,
if
things
get
a
little
messy,
you
can
click
the
tidy
View
button
there
to
clean
it
up.
That's basically a shell in there, and then you can also add, like, a tarball, an archive. If you want to drop an archive on the node after you've created it and then execute a command, you can do that as well. That's just focusing on provisioning the devices when you want to stand them up, and again, there are a number of other things you can do after the fact.
B
You
can
use
ansible
if
you
wanted
to
from
your
local
machine
or
you
could,
you
know,
have
ansible
on
in
that
archive
apply
that
command
as
well
right.
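As a rough illustration of that idea (the host names, user, and playbook file below are placeholders, not anything from OpenLab), driving Ansible from a local machine against provisioned lab nodes could look like:

    # Hypothetical inventory listing two provisioned lab nodes.
    cat > inventory.ini <<'EOF'
    [lab]
    node-0.lab.example.net
    node-1.lab.example.net
    EOF

    # Check connectivity, then apply a playbook shipped in the archive.
    ansible -i inventory.ini lab -u ubuntu -m ping
    ansible-playbook -i inventory.ini -u ubuntu provision.yml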
So if you have any questions, here's basically how to get in contact with us, and the website is there, and that's pretty much it. In terms of why I thought it would be beneficial to present OpenLab to the CNCF CI working group: I think we have very compatible missions, and we've already been working with, for example, Ed. I'll stop sharing my screen right now.
We've been working with Ed from Packet, as you saw, hopefully, in the presentation; Packet is one of our sponsors, and we're looking forward to doing more with Ed, along with offering resources for the CNCF cluster. But for this CI working group, I think we could, again, offer physical resources to augment some of what's already available as well.
All of that, and that particular component, I'll have to check; I think it's using Jacks, I think it's called, but it's an open-source component that they're using. Everything, all of this, is open source, and I can get it to you and you can try to figure out how to make it work in the way that you'd like it to.
And what's also cool about it is the software: for example, there's one person who, I think, had 100-gig network switches, and there's no 100-gig network switch in the whole federation. A lot of these resources are federated; those thousands of nodes are federated across multiple sites, and so you can take the actual software that I showed.
Because, let's say, for example, there's a big holiday push or something and there's a lot of usage going on, and then after that, for a couple of months, they're only using about five of those nodes. Well, you could actually turn off the federation, per se, for those few months and then turn it back on for just those five nodes, or all ten, so there's a lot of flexibility there too.
C: Looks good, okay. So before we get started: I'm one of the, I guess you'd say, co-creators and co-contributors of Network Service Mesh, so I won't go too much into the project itself and what we're trying to do. But one of the problems we were running into is that we needed a Kubernetes cluster in our CI system. We use CircleCI and we actually have a build pipeline; I'll see if I can show you an example of one such build pipeline.
C
So
this
one
looks
like
it
passed
and
so
we're
going
to
show
the
chicks
and
we'll
just
go
to
one
of
them
and
go
to
the
workflow.
So
we
have
this
basically
building
multiple
images
and
so
on.
We
have
this
packet
deploy
and
a
set
of
integration
tests
that
run
after
that,
and
then
we
destroy
the
cluster,
so
pretty
pretty
simple
setup
so
before
and
this
this
could
be
something
maybe
for
your
v2
planning
when
you're
moving
forward.
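The stages described here could be driven by something as simple as the following sketch; the make target names are assumptions for illustration, not necessarily the actual Network Service Mesh targets:

    # Build images, bring up a cluster on Packet, run the tests, always tear down.
    set -e
    make docker-build                  # build the container images
    make packet-start                  # terraform apply + Kubernetes install
    make integration-tests || status=$?
    make packet-destroy                # destroy the cluster either way
    exit "${status:-0}"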
C
One
of
the
problems
we
ran
into
is
we
were
originally
using
the
cross
cloud,
CI
stuff
that
you
all
built
and
I
think
it's
absolutely
fantastic
with
the
stuff
that
you've
built
out
so
far.
One
of
the
problems
we
ran
into,
though,
was
it
when
the
cluster
would
come
up.
These
integration
tests
that
take
about
a
minute
and
a
half
to
to
run
we're
taking
15
to
20
minutes
to
run
with
the
cluster
that
was
spun
up
through
the
through
the
cross
cloud,
CI
and
so
for
the
short
term.
Let me go ahead and open it up and start from the beginning as to what we do. So we have our Makefiles, and we actually include various Kubernetes make targets and so on, and eventually we do a set of includes. We include the stuff that we run for Packet, so you see we have a packet start: you can do make packet start, and then it runs terraform apply and then a script that installs Kubernetes. The terraform apply should be pretty straightforward.
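A minimal sketch of what a target like that might expand to, assuming the Terraform definitions live in a scripts/terraform/packet directory and expose a node_ips output (both names are assumptions):

    # Provision the machines on Packet.
    cd scripts/terraform/packet
    terraform init
    terraform apply -auto-approve

    # Copy the install script to every node and run it.
    for ip in $(terraform output -json node_ips | jq -r '.[]'); do
        scp ../../install-kubernetes.sh "root@${ip}:/tmp/"
        ssh "root@${ip}" 'bash /tmp/install-kubernetes.sh'
    done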
So we start off with basically copying over some of the scripts; we copy the Kubernetes install script over to each of the systems. The Kubernetes install script just installs kubeadm; it's lifted straight from the kubeadm install page, so it's pretty generic: it installs kubeadm, kubectl and Docker.
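Since it's lifted from the official kubeadm install page, the script presumably looks roughly like the documented Ubuntu steps (treat this as a paraphrase of those docs, not the project's exact file):

    #!/bin/bash
    # install-kubernetes.sh: install Docker plus kubeadm, kubelet and kubectl.
    set -e
    apt-get update
    apt-get install -y docker.io apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
        > /etc/apt/sources.list.d/kubernetes.list
    apt-get update
    apt-get install -y kubelet kubeadm kubectl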
After we do that, we run two scripts in parallel. One of them is a start-master, and for the other one, we don't want to wait for our worker images to download, so we also run the download of the worker images on the systems that are not the master. The way that looks is: we have start-master, which runs a kubeadm init. We pass in a pod network CIDR range, and then we copy the config to the home directory.
C
This
is
all
what
this
actually
is
running
on
packet
itself.
We
then
queue
control,
apply
a
ACMI
plugin,
so
we
can
have
some
networking
untain
tower
notes
and
we
then
do
cube
admin,
token,
create
and
print
join
command,
and
so
what
this
print
join
commands
does
is
little
is
it'll,
create
a
script
that
it
does
cube.
Admin
join
with
the
proper
tokens
and
an
email
or
necessary
the
proper
tokens
and
and
addresses
necessary
in
order
to
join
the
master,
and
we
store
that
to
the
join
cluster
to
a
join
cluster
script.
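Put together, the start-master flow described above is roughly the following; the pod CIDR and the CNI manifest URL are illustrative stand-ins, not necessarily what the project applies:

    #!/bin/bash
    # start-master.sh: initialize the control plane and emit a join script.
    set -e
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl usable from the home directory.
    mkdir -p "$HOME/.kube"
    cp /etc/kubernetes/admin.conf "$HOME/.kube/config"

    # Apply a CNI plugin so the nodes get pod networking.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # Generate a script the workers can run to join this master.
    echo '#!/bin/bash' > join-cluster.sh
    kubeadm token create --print-join-command >> join-cluster.sh
    chmod +x join-cluster.sh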
C
The
workers,
just
if
they're,
not
the
master
all
they
do,
is
they
just
pull
images,
and
that
takes
about
a
minute
and
a
half
to
two
full
images.
And
then,
after
that,
we
then
copy
the
join
cluster
that
script
it
was
generated
and
we
run
it
on
there.
We
have
a
full
working
cluster,
so
it's
a
pretty
a
pretty
simple
script
and
then
the
only
thing
that's
left
to
do
in
our
circle
CI
system
is
to
download
the
is
to
download
the
cube
config
file.
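The worker side, and the final CircleCI step, might then look like this sketch (host and file names are assumptions):

    # On each worker: pre-pull the images, then join using the generated script.
    kubeadm config images pull
    scp root@master:join-cluster.sh /tmp/join-cluster.sh
    bash /tmp/join-cluster.sh

    # Back in CircleCI: fetch the kubeconfig so later steps can reach the cluster.
    scp root@master:/etc/kubernetes/admin.conf kubeconfig
    export KUBECONFIG="$PWD/kubeconfig"
    kubectl get nodes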
We run make packet start. We decided to use the Ubuntu 16.04 images, because on Packet they're the only ones that have the fast boot; the other ones take around, I think, three or four minutes.
C
To
start
this
this
is
set
to
start
in
should
be
under,
should
be
under
a
minute
or
around
a
minute
on
average,
so
it
shaves
off
some
time
on
our
on
our
builds
and
you'll
see
the
script
after
about
after
a
few
moments,
you'll
see
a
start
to
kick
it
off,
and
you
know
how
you'll
see
a
full
running,
a
full
running
system.
So
I
don't
know
if
you
want
to
read
around
for
the
full
working
system
or
not,
it
takes
around
two
or
three
minutes
and.
So that's a question that has popped up and that we've been discussing. The idea that we came up with, as maybe a next step, still needs some investigation first. Well, actually, there are two steps for Network Service Mesh. The idea would be to create a namespace for each NSM integration test, and the second thing that we want to do is make a change in NSM itself.
Right now the CRDs that are installed are not namespaced, they're cluster-scoped, so we need to change that so that the CRDs are namespace-scoped instead. Once we do that, then we can just have an always-on cluster that continuously runs: create the namespace, run all the tests, delete the namespace, and everything should just be deleted with it.
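A sketch of that always-on loop once the CRDs are namespace-scoped (the namespace naming scheme and the NAMESPACE variable are placeholders):

    # Run each pass of the integration tests in its own throwaway namespace.
    ns="nsm-test-$(date +%s)"
    kubectl create namespace "$ns"
    make integration-tests NAMESPACE="$ns" || status=$?
    # Deleting the namespace removes everything that was created inside it.
    kubectl delete namespace "$ns"
    exit "${status:-0}"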
I should have run that with time, but it takes roughly three and a half minutes or so to fully provision new systems with Packet and spin up the Kubernetes cluster. The scripts themselves are not locked down; there's no reason why you couldn't run the create-Kubernetes-cluster part on its own. The one area, though, is that right now we do pull information from Terraform, but there's no reason why this couldn't be adapted to, perhaps, take a list of masters and a list of workers, and then you could just loop over them and run the right commands.
C
So
it
should
be
relatively
easy
to
to
adapt
to
it.
The
one
area
where
I
think
cross
cloud
CI,
based
on
our
previous
conversations
that
you
may
have
if
you
wanted
to
adapt
something
like
this,
is
that
I
suspect
that
you
probably
want
to
run
off
with
the
latest
leading-edge
kubernetes
system
and
there's
two
challenges
that
I
could
see.
Number one is that where you download the Kubernetes images from appears to be hard-coded, so there's a gcr.io path, and it doesn't look like you could build your own, publish to your own repo and then pull off of that. And a second issue that you may end up running into is that in kubeadm they are starting to put in some support for other, I guess you would say, configurations. So I'll show an example.
C
So
we
have
cube
so
cube
in
it.
You
can
pass
in
a
kubernetes
version,
but
that's
only
going
to
download
from
from
that
GC
r
GC
our
end
point,
and
you
can
see
the
see
you
see,
I
misspelled
it
yeah.
You
can
see
the
paths
that
it's
pulling
for
him
on
here
and
so
being
able
to
I.
Don't
know
if
the
latest
versions
that
from
the
career
DCI
are
published
to
this.
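For illustration, the knob being referred to looks like this (the version string is only an example); the images still come from the default upstream registry:

    # Initialize a specific Kubernetes version...
    kubeadm init --kubernetes-version v1.13.0 --pod-network-cidr=10.244.0.0/16
    # ...and list which images (and registry paths) kubeadm will pull for it.
    kubeadm config images list --kubernetes-version v1.13.0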
If they are, you might be able to just pull those, but if you want to compile a specific, latest version from master, then this may run into some issues. So those are the only real problems that I can see. There is the ability to start manipulating some of the phases, so you can start adding or changing things about the cluster.
C
If
you
go
with
a
cube,
admin
path
would
be
to
work
out
what
knobs
you
want
to
expose
and
then
you
may
end
up
having
to
contribute
to
the
cube
admin
project
in
order
to
in
order
to
get
those
knobs
in
so
and
I,
don't
know
what
the
process
would
look
like
that
or
how
long
it
would
take,
or
anything
like
that.
But
it's.
A: Awesome, yeah. We've been attending the cluster lifecycle meetings, the Kubernetes cluster lifecycle SIG and some of the other groups, which are related to kubeadm and how the clusters are brought up in a Kubernetes way, and there is a need for supporting different sources for those binaries. A related thing, on the Cross-Cloud CI project and the dashboard itself, is that we were supporting source builds.
A
So
then
you
could
turn
on
very
specific
flags
that
may
not
be
in
any
any
type
of
regular
build
and
that
would
allow
plugins
and
everything
else
to
go
in
and
I
think
there's
that's
in
line
with
stuff,
that's
desired
for
cube,
ADM
and
and
a
cluster
lifecycle
in
general.
It's
just
maybe
lower
priority.
A
As
far
as
to
support
that
for
cross
cloud,
the
ability
to
add
in
or
use
cube,
ADM
for
part
of
it
I
think
is
compatible
and
specifically,
the
features
that
are
needed
for
NSM.
Doing
these
fast
loops
for
the
testing
would
be
being
able
to
use
binaries
I
think
that's
that's
been
on
our
agenda
for
quite
a
while.
So
that's
definitely
something
we'll
keep
in
mind
for,
like
the
next
version,
reusing
either
built
artifacts
so
that
you
don't
have
that
build.
C: Rather, if we can get the cross-cloud CI back to a point where we can just rely on that, for me that'd be the best scenario, because we don't want to be maintaining scripts over time that are specific to Network Service Mesh, and we would love to use the cross-cloud CI stuff instead.
A: Consider the cross-cloud one to be vanilla; we're trying to strap it onto the systems as straightforwardly as possible. Like, if you followed the docs with kubeadm and did a build on a given OS, like CoreOS or Ubuntu, then it should be vanilla Kubernetes. But we'll get an issue up to look into that, and feel free to give feedback.
C
Ok,
yeah
and
we'll
have
to
rummage
little
bit
through
the
mysteries,
the
git
history,
to
get
the
exact
state
back
in
terms
of
deploying
the
the
packet
sorry
for
deploying
the
cross
cloud
on
to
other
pakka,
but
yeah
it
was,
it
was
reproducible.
So
so
we
should
be
able
to
work
that
out
and
my
like
operational
skills
with
trying
to
debug
why
a
pod
is
slow
is
not
particularly
strong.
So
any
help
with
that.
A: So I guess we can post something on the Cloud Native Slack, in the CNCF CI channel, to get feedback on that and to start gathering feedback for the planning and what we may want to do with kubeadm. If you can create an issue on the cross-cloud project for the performance issue, I'll drop a link in here for that, that would be great. Okay, yeah.
A
No
apology
necessary
time
is
understandable
and,
like
I
said,
I
think
you'll
come
up
with
different
ideas
and
that
can
contribute
it
to
different
projects
and
I'm.
Happy
first
staff,
like
they
open
lab
staff,
that
Melvin
is
doing
its
we
need.
We
need
different
things
going
on,
so
those
ideas
can
be
shared.
I.
A: This CI working group, in my mind, is about anything that helps with the infrastructure that we're doing for Kubernetes and anything within the Kubernetes and CNCF community, so please invite them to join. Again, this is monthly; we're not going to have a December meeting because it falls on the 25th, Christmas, so the next one will actually be in January. In the meantime, if you want to prepare something or get someone involved, there's the mailing list, and on the Cloud Native Slack, join the CNCF CI channel.