From YouTube: CNCF CI WG Meeting - 2018-09-25
A: The docs did this as part of getting a quick start for using CNCF CI, and they're planning some updates for the overall dashboard and the UX. One of the things that is rolling into place and should be live soon is the N/A handling: whenever an earlier step fails, the status badge will show N/A.
A: We're also working with several groups on using the various technology behind the CNCF CI. The Network Service Mesh working group is using the Kubernetes provisioner called cross-cloud — that's what does the multi-cloud provisioning — and starting to use that for testing. We've also been helping the Network Service Mesh group directly on some of the other CI issues: testing the project across different network cards and other things. So it's not just cloud-specific; there are some hardware-specific things for that group.
A: And this is outlining some of the thoughts for next goals: probably a big focus on the UI, having some more screens about the projects and the clouds, and then looking at some things with user scenarios where we actually do integrations between the projects. This lists some other things as well, including switching over to Sonobuoy, and some other items like kubeadm.
A: Thanks for that, Andrew. I think what we're thinking is it would be additional support: so we have cloud-init, and then optionally have direct Ignition support. Gotcha. There are some other things, but this is kind of what the parts of the tech are that we may adjust. Putting kubeadm in place for the actual bootstrapping of Kubernetes would probably be on the sooner-than-later list; Ignition or anything else is probably further out since, like you say, it's CoreOS-specific right now.
A
Potentially,
we
could
go
as
far
as
using
the
cluster
API
and
with
cube
ATM
if,
when
an
F,
it
gets
to
that
point
and
it
will
exactly
right
now.
What
we're
looking
at
is
the
after
the
terraform
provisions,
the
resources.
How
do
you
get
kubernetes
installed
and
brought
up
and
have
the
cluster
connected
so
probably
be
looking
at
time
cube
idiom
at
that
point
and
then,
as
new
features
make
it
to
production.
As
far
as
in
the
cube
ATM
space,
we
start
looking
at
if
it
is
a
good
match.
A: We're working with a lot of groups, like the Cluster API group, and — just mentioning — OpenCI and a lot of the other CI/CD groups that are in CNCF and the Linux Foundation. Some of us on this team are actually at ONS in Amsterdam, so we'll be meeting with more folks that are here. If you're here, reach out; some of the time we'll probably be at the CNCF booth, and otherwise you can ping us on Slack if you're in Amsterdam.
A: So, upcoming events: we regularly attend the Network Service Mesh meetings — that's going to be a big feature added into Kubernetes, and we're helping with some of the testing on that — and the Kubernetes conformance working group, trying to keep track of that sort of thing. As part of that we will be at KubeCon China, doing an intro and a deep dive with a focus on the project and adding projects — if anyone's there, come join us.
B: Unfortunately, yeah, I've been in the middle of trying to get stuff working and then end-of-quarter, so yeah. I want to go ahead and share my screen — sounds good, hold on. Let me zoom... this being Zoom, if I share my screen — when you go fullscreen, my Zoom automatically goes fullscreen, and since I'm on a Mac it takes me to a new desktop, and then I can't share it from there. So that's working; I don't know why this isn't. Anyway.
B: Yakity stands for "yet another Kubernetes installer thingy", and it very closely follows Kelsey's Kubernetes the Hard Way. So, the dirty secret about me is: I worked on Docker as a developer for a year without ever running Docker — I worked on the storage and didn't have to touch Docker, I just wrote the components. I've been involved in Kubernetes and recently wrote my first job spec, so I'm still learning, and it's a lot to learn.
B: So part of this was learning, but what came out of it, I think, is something interesting that could be useful for other people. It is essentially a giant shell script, and that's OK in my opinion, because I use ShellCheck to ensure that it's both POSIX compliant and that there are no errors. And so what does this shell script do? It stands up Kubernetes. Now, why is that interesting?
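As an aside, the kind of thing ShellCheck enforces here — POSIX sh compatibility rather than bashisms — looks like this (my own illustrative snippet, not from yakity):

```shell
# ShellCheck (`shellcheck --shell=sh script.sh`) flags bashisms such as
# `[[ ... ]]`; the POSIX forms below pass it and run under any /bin/sh.
role="controller"
if [ "$role" = "controller" ]; then   # POSIX test, not [[ ... ]]
  printf '%s\n' "starting control-plane components"
fi
```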
B: Well, in the example here you can take two completely disparate Amazon instances that are in different networks. Ahead of running this — and this assumes you've got the hardware up; yakitori is for standing up the hardware, this assumes the VM or the instance already exists — you get an etcd discovery URL with a size of... sorry, the number of controllers in your control plane: the number of control-plane nodes. So that's the discovery.
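The discovery URL he's describing comes from etcd's public discovery service; minting one is a single HTTP request sized to the expected member count. A sketch — the actual curl step needs network access, so it's left as a comment:

```shell
# Build the request URL for a new etcd discovery token sized to the
# number of control-plane nodes (here: 1).
size=1
new_url="https://discovery.etcd.io/new?size=${size}"
echo "$new_url"
# To actually mint the one-time discovery URL (needs network):
#   discovery_url=$(curl -fsSL "$new_url")
```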
B
I'm
gonna
get
a
size
of
one,
that's
the
number
of
control,
plane,
nodes
right
and
then
you
basically
run
yakety
on
both
of
your
nodes.
One
is
the
controller
number
of
controllers
and
it
says
it's
a
controller
and
it
says,
but
the
total
number
of
nodes
is
two
on
the
second
one.
You
say:
hey,
you're,
a
worker.
There
are
there's
one
controller
and
there
are
two
nodes,
so
basically
the
same
on
each
except
one's,
a
controller,
one's
the
worker.
B: Hopefully that doesn't blow up. So that's running yakitori, which bootstraps the hardware. What makes all this interesting is that it synchronizes the communication between the nodes — what I mean by that is that they discover one another. When you stand up Kubernetes normally, whether you're using kubeadm or another tool, you need to stand these nodes up in a serial order: you see if one is up, and then you can start to stand the others up. With this, you can stand them all up at once.
B: Sorry I didn't do a better presentation plan. So: it discovers the etcd cluster members from that discovery URL, and because it knows how many controller nodes are supposed to exist, every node can wait until they all appear, so they all join the etcd cluster. Then each node configures itself with etcdctl and uploads information about itself to the etcd host. Then all of the nodes wait for all the nodes to appear in etcd, and once all the nodes have appeared, everything kind of proceeds.
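The wait-until-everyone-appears step can be sketched as a simple poll loop; `count_members` below is a stub standing in for an etcdctl member listing, so this illustrates the pattern rather than yakity's actual code:

```shell
# Hedged sketch of the synchronization idea: every node polls etcd until
# the expected number of members has registered, then proceeds.
# `count_members` is stubbed here so the sketch runs anywhere; in a real
# cluster it would count entries from an etcd member listing.
count_members() { echo 2; }

wait_for_members() {
  expected="$1"
  while [ "$(count_members)" -lt "$expected" ]; do
    sleep 1   # keep polling until all members have appeared
  done
}

wait_for_members 2 && echo "all etcd members present; continuing bootstrap"
```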
B: As normal: DNS entries get created, routes get created, the kubelet — you know, the CRI — is installed, Kubernetes is installed, and that's sort of it. But again, I think the interesting thing about it is the self-discovery aspect. It allows you to stand up, you know, a hundred nodes really quickly, because they can all just self-discover one another — and again, on different networks as well. So we've discussed, totally, how can we, you know, divorce this from the etcd discovery?
B
Services
doorway
to
do
that
and
we're
kicking
around
some
ideas
but
I
think
it's
a
pretty
at
least
it
it's
it's
a
way
to
do
it
I,
let's
say
nifty,
but
it's
a
way
to
do
it
so
that
the
the
nodes
can
all
self
discover
one.
Another
I
haven't
seen
anything
like
that,
but
maybe
there
is
again
I'm
still
sort
of
nascent
to
it
all
and
it's
still
creating
the
load
balancer.
Oh,
my
gosh
I
filed
an
issue
with
hash
Igor
that
there
load
balancer
Creator
takes
forever
because
it
waits
for
it
to
be
ready.
B: And then you can just follow the tests with the t log, and then you can just tear down the cluster. It also has the ability to download the test results and upload them to GCS, as Eric — and I will never say his last name correctly, because I've not been told; it looks like "Fejta"? I don't know — but as Eric Fejta said, essentially I, you know, recreated parts of Prow here, and this will probably be what we move or migrate into Prow. We'll still use yakity under the covers to stand up the clusters, because there's still not a great solution for that for us; but for actually running the tests once the clusters are up, we'll probably be migrating to kubetest and Prow at some point in the future. Without the load balancer, by the way, this takes about a minute and a half, so it's pretty quick.
B
It
has
the
option
to
use
an
Amazon
load
balancer
in
order
to
make
the
cluster
accessible
externally,
which
is
useful
for
things
like
Travis
CI,
so
I
don't
have
a
I
can
show
you
what
the
end
result
of
this
looks
like.
Unfortunately,
it's
not
great
at
the
moment
in
fact,
I'd
even
have
to
go
over.
Where
would
I
have
to
go
to
show
you
have
to
go
over
here
because
it's
been
failing
forever,
because
we
I
don't
know
if
how
many
of
you
in
sync
testing
but
I
mentioned
that
we
hit
a
time
out.
B
It
turns
out
that
yeah
it
turns
out
that
Travis
is
limited
to
50
minutes
for
free
builds
yeah,
it's
not
active,
so
mucking
am
I
even
going
to
get
a
build
history.
Okay,
there
we
go
so
where
is
do
I
have
a
50
minute
builds
so
show
more
all
right,
I'm,
just
saying
if
I
had
the
50
minute
one.
Oh,
you
know
what
I
sent
it
to
I
sent
that
link
to
the
group.
Sorry
I'm,
sorry
again
that
what
I'm
apologizing
so
much
and
say
that
I'm
don't
have
it
ready.
B
Yeah
here
we
go,
let's
calm,
I,
don't
know,
I
didn't
find
it,
probably
because
I
reran,
oh,
it
was
marked
as
an
error.
That's
why,
as
I
was
looking
for
green,
but
it's
working
it
just.
It
died
because
of
the
timeout,
but
I
mean
it
ran
through
50
minutes
of
the
conformance
tests
and
for
anyone
who
you
know
familiar
with
them
and
it
would
eventually
have
uploaded
them
if
it
had
been
able
to.
B
Luckily
and
I
love,
Travis,
see
I've
used
it
for
years,
any
mail
them
this
morning
and
they're
gonna
increase
the
time
limit
for
us
for
a
few
weeks
until
we
can
figure
out
how
to
get
too
proud.
So
this
bit
up
the
cluster
over
here,
it
waits
for
it
to
come
online.
So
long
thanks
for
the
fish.
So
this
is
the
load
balancer
a
couple
of
interesting
things
about
it.
B: It uses an nginx proxy at the front so that it can answer on pure HTTP, so that the Amazon load balancer can actually access the backend cluster. It also includes some nice information, like which artifact was used to stand this thing up, if you want to know that, or, you know, here's the job spec that you can use to actually run the tests — so you could schedule that. So, if I do a—
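The nginx front end described here might look roughly like this — a hedged sketch, not yakity's actual configuration; the port, path, and health-check route are all assumptions:

```nginx
server {
    listen 80;
    # Answer the Amazon load balancer's checks on plain HTTP...
    location /healthz {
        # ...by proxying to the TLS-only apiserver on this host.
        proxy_pass https://127.0.0.1:443/healthz;
        proxy_ssl_verify off;  # the cluster CA is self-generated
    }
}
```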
B: Yeah, so, like I said, I probably need to put together some slides, but I've just been so busy trying to wrap up the end of the quarter, and I didn't want to punt this demo again. Let me — last thing — let me SSH into one of the boxes so that I can show you... actually, this one's up, so I can show you this: sudo journalctl...
B
Oh
did
I
kill
that
ahead
of
time
failed
to
install
Oh
son
of
a
gun,
that's
an
old
one,
so
the
exit
that
my
I
can
still
show
you
that
if
there's
an
error
there
I
said
I'm
trying
to
get
the
store.
Let
me
see
if
it
shows
the
waiting
now,
just
just
too
much
in
the
way.
Let
me
let
me
go
to
a
controller
node
that
will
show
it
141.
B: And I'm finding a lot of people don't do that. A lot of people expect the kubelet to be up on the control-plane node, which I find interesting, because to me that's just a security-surface issue: don't put the kubelet there if you don't need to. So one of the aspects of this that might be interesting to you, Taylor, is that it doesn't rely on shared DNS, because it actually runs CoreDNS on all the control-plane nodes. It's not using CoreDNS for the service DNS, although it could — it has the option to use that instead of kube-dns — although when I enable it, the DNS tests fail, but DNS works for services. I don't understand that; I haven't tried to debug it.
B
What
it
does
is
it
installs
the
core
DNS
and
all
the
control
plane
nodes,
and
so
once
all
of
the
nodes
are
aware
of
each
other.
They
all
register
themselves
with
DNS
and
what
they
do
is
they're,
there's
a
round-robin,
a
records
so
that
you
can
just
do
host
you
know.
K8S
is
the
name
of
the
cluster
and
all
the
nodes
know
about
KS
vmware
dot
CI,
even
though
that
doesn't
exist
outside
the
cluster
and
co.
One.
Oh
do
I
not
have.
B
What
is
this
one?
Nine
two
votes,
one
six,
eight
three
one
forty-one
and
making
too
many
changes
both
Co
one
VMware,
that's
yeah,
yeah,
I,
guess
I
didn't
have
the
suffix
it
up
correctly
yeah.
So
then,
then
they
wait
on
all
the
hosts
to
appear
and
I
believe
I.
Believe
I
actually
got
this
part
working
the
other
day
yeah.
B
So
it
also
sets
up
a
cname
for
the
load
balancer
so
that
it
doesn't
paperclip
to
try
to
access
it.
If
you're
trying
to
access
the
load
balancer,
so
it
doesn't
yeah,
it
doesn't
need
the
the
shared
DNS
because
it
just
stands
up.
Dns
inside
oh
I
keep
forgetting
sorry
because
they
all
discover
one
another.
All
of
the
certs
have
both
the
IP
addresses
and
the.
B
So
you
can
see
that
it
has
the
addresses
of
the
controller
nodes,
even
though
they
weren't
known
at
the
they
wouldn't
have
been
known
ahead
of
time,
because
this
is
actually
using
DHCP.
None
of
these
are
using
static.
Ip
addresses
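Certificates carrying both IP and DNS subject alternative names, as described, can be produced like this with OpenSSL 1.1.1 or newer — the name and address are placeholders, and this stands in for, rather than reproduces, yakity's own CA handling:

```shell
# Self-signed cert whose Subject Alternative Name carries both the
# cluster DNS name and a node IP, so clients can verify either way.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/k8s.key -out /tmp/k8s.crt \
  -subj "/CN=k8s.vmware.ci" \
  -addext "subjectAltName=DNS:k8s.vmware.ci,IP:192.168.3.141"
# Inspect the SANs that were embedded:
openssl x509 -noout -ext subjectAltName -in /tmp/k8s.crt
```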
B: What you do is you can either pass in the certificates to yakity — or yakitori, however you want to look at it — and if you pass in a CA, it will sign everything using that CA. You can actually run an up command, like I just did, without a CA. Although Terraform isn't great at being modular, in the aspect that you don't get to say "only use Amazon if these environment variables are enabled" — I tried that with cross-cloud, and it just doesn't work, because if you've got the provider enabled, it wants to configure itself. So what we actually do is, in the entry point of the Docker image—
B: If it detects the AWS access keys, it actually copies the load balancer config and the external kubeconfig generator into the project inside the container. It'll generate the CA for you, so this whole thing is using TLS, but with essentially a CA that gets generated for this cluster — and you can access it from outside of Docker as well, obviously, because the kubeconfig file just gets dropped here. I mean, Taylor—
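A hedged sketch of the entrypoint idea — gate the AWS-only Terraform pieces on whether credentials were passed into the container, since Terraform itself can't conditionally disable a configured provider. The function and file names here are hypothetical:

```shell
# Sketch of a Docker entrypoint: copy AWS-only configs into the
# project only if AWS credentials were passed into the container.
enable_aws_extras() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "enabling AWS load balancer + external kubeconfig configs"
    # e.g. cp /extras/load-balancer.tf /extras/external-kubeconfig.tf "$PWD"
  else
    echo "no AWS credentials; skipping AWS-only configs"
  fi
}

# Demo values; a real entrypoint would inherit these from `docker run -e ...`
AWS_ACCESS_KEY_ID=example
AWS_SECRET_ACCESS_KEY=example
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
enable_aws_extras
```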
B: This should look very, you know, similar to you: there's the kubeconfig file. And when you actually run the tests — so I won't, I'm not gonna run a new one, I'm gonna use an old one here: t log. This will actually show you what happens when you do a log against an existing test that's been run there.
B: I just ran one of the tests to make it quick, but yeah — so that would tail the logs until they were done, and then you could do t put to upload them to GCS. So, I mean, that's it: the self-discovery, you don't need shared DNS, all certs have IPs and domains, and then the yakitori driver stands up vSphere and makes running the e2e tests against vSphere pretty easy. I think yakity is the more interesting part to most people, probably. Any questions?
B: And not — having certs with both IPs and names was nice, although, in fairness, the fact that cross-cloud didn't helped catch a bug in Kubernetes, in my opinion: they changed a default setting and all of a sudden preferred IPs over domain names. But I like that it can support both, and I like that it can—
B
Just
you
know
you
can
connect
two
nodes,
one
on
GCS
and
one
on
Amazon,
so
it's
a
force,
reverse
lookups
and
yeah
just
kind
of
works,
hopefully
what
yeah,
except
when
it
doesn't
but
yeah.
So
if
anyone
has
any
other
questions,
we're
going
to
take
a
look
at
it,
please
feel
free
to
hit
the
github
page,
it's
linked
in
the
docs.
D: Unfortunately, there's something around my setup — it's various things — so I'm gonna use the chat area. Or should I use the... maybe if I use the CI working group channel. Yes.
D: They are, yeah. So first of all, this is the APISnoop interface, and if you click on that little bit there — one of the things that we wanted to be able to do is show our change in coverage over time, to see how well our community is doing in hitting the various endpoints that are provided by the Kubernetes interface. We spent a lot of time trying to figure out spinning up the clusters and getting the data into a place where it can be consumed.
D: This is a problem, and as we dug into the various toolchains that are available within our community, we found the CI systems provided by test-infra and sig-testing are kind of where it's coalescing, for Kubernetes for sure. And if you go to the — there's a dropdown; it's not quite obvious as to the dropdown.
D
At
this
moment
it's
still
somewhat
manual,
but
we
have
found
a
much
better
place
than
trying
to
generate
the
data
manually
ourselves
and
I'm.
Gonna
drop
a
link
here.
This
is
test
grid,
test
grid
and
I'll
drop.
Two
links
test
grid
is
where
all
of
the
jobs,
some
of
the
jobs
proud
jobs,
get
published
in
a
way
that
we
can
group
them
together
in
meaningful
ways.
The
second
link
there
is
specifically
for
our
conformance
Crowl
jobs
that
get
grouped
together.
D
Underneath
that
you'll
see
we
have
GCE
for
master
112,
111,
110,
1
9
for
Deb
and
release
and
the
same
thing
for
kind
and
digitalocean
and
OpenStack
and
I
do
and
there
it's
it's.
It's
a
process
to
get
there,
but
because
of
the
work
we've
done
to
integrate
the
audit
logs
into
the
main
process,
that's
now
available
for
every
every
job
that
at
least
it's
part
of
conformance,
and
we
found
it
I.
D
Think
one
of
the
things
we're
going
to
be
working
with
in
API
snoop
is
to
take
automatically
pull
in
what's
available
via
test
grid
and
and
of
make
that
render
it
in
a
way
where
we
have
one
rendering
of
the
API
Snoop
starburst,
but
other
really
we're
gonna.
Allow
anybody
to
spin
up
the
stuff
and
have
a
different
view
and
provide
their
input,
and
that's
one
of
the
like
this
week
we're
doing
some
refactoring
of
the
UI.
D
So
the
next
next
thing
I
want
to
show
you
if
you,
if
we're
in
the
conformance
group
and
I'm,
going
to
click
on
GCE,
112,
dev
and
it's
passing,
and
so
that
takes
us
to
to
here
and
then,
if
I
click
on
about
and
I
go
see,
these
results
in
Gruber
Nader,
it's
kind
of
a
little
dance
to
get
down
to
it,
but
that
takes
us
to
this
particular
test
and
all
of
its
jobs
that
have
popped
out
of
it,
I'm
going
to
click
on
job
139.
So
we
see
yeah.
D
It
was
a
beautiful
test
and
all
the
things
passed
and
inside,
but
beyond
the
logs
and
I'm
gonna
go
click
through
a
couple
steps
we
can
get
through
and
pull
down.
All
of
the
wonderful
data
that's
available
on
the
master
node
and
the
best
thing
there
for
API
snoop
anyway,
is
the
API
server
audit
log,
so
I'm
going
to
copy
that
link
location.
D
This
wouldn't
be
possible
without
prowl.
So
if
there's
any
questions,
I
feel
free
to
stop
I'm,
not
I'm,
just
kind
of
free-formin.
Here,
I
don't
have
a
Prezi
Prowse
been
really
great
and
that
it
does
a
couple
of
things
one
it
manages
they're
merging
so
that
our
communities,
particularly
the
many
many
new
repos
inside
kubernetes,
can
be
merged
in
a
safe
way.
It
also
provides
some
plugins
for
managing
the
conversations
and,
and
things
now
I'll
go
into
that
a
little
bit
later,
but
I
wanted
to
acknowledge
it.
D
It's
a
really
great
tool
that
would
be
great
to
use
beyond
just
kubernetes,
for
possibly
in
the
CN
CF
community
at
large.
That's
probably
enough
for
API
s--
new
query.
We
need
more
data
and
I
think
we
need
data
beyond
what's
available
in
just
the
audit
logs
I
know
that
Kathryn
did
some
really
beautiful
work
too.
Modify
that
go.
Let
the
go
test
infrastructure
to
when
you
create
a
binary
augmenting,
the
binary
so
that
it
generates
every
second
on
HTML
file
with
test
coverage.
D
It
looks
really
cool,
but
we
need
to
integrate
that
so
that
the
artifacts
are
available
inside
the
inside
the
gubernatorial,
Nader
artifacts,
and
so
that's
one
thing
will
be
working
on
soon,
but
one
of
the
CAPM
issues,
I'm
gonna,
now
kind
of
step
away
from
me.
I
start.
D: That might be useful within the CNCF as a whole, but it requires some technical things, and I thought it might be interesting to see what it would be like if we automated some of the interactions with our community, to allow them to connect and benefit from some of this infrastructure. So we're going to do this in the form of a short skit. I'm going to step out of the way and become a robot. I will be played—
E: As a CNCF project admin, I would like to request some CI automation from CNCF CI in order to benefit from the CI infrastructure maintained by our community. Given that my repo is included in the CNCF landscape, when I create a ticket within the repo and tag it @cncf-ci, and I grant cncf-ci admin on my repo, then cncf-ci will automatically configure my project for Prow and add other services. So: @cncf-ci, did—
D
You
guys
oh
hi,
hi
H
I'm,
not
about
yet
but
but
I
can
be
in
order
for
the
scenes.
Yes
to
provide
the
CI
services,
you
need
I'm
gonna!
Need
you
to
grant
me
admin
access.
You
can
go
to
this
little
link
here
and
an
add
me
I'll
message
you,
when
I've
accepted
that-
and
we
continue
at
that
point-
I'll-
be
able
to
configure
your
web
hooks
change.
Your
labels
merge
your
PRS
and
pass
everything
that
passed.
D
Your
CI
test
and
I
work
with
a
lot
of
stuff
there'll
be
some
issuing
/
commands
that
are
available
here.
You
can
read
about
them,
I'm,
just
like
you
know,
and
CNCs
yeah
editors,
madmen
awesome.
While
I
was
away
and
honestly
added
me
as
an
admin,
I
configured
the
clash
commands
for
you,
but
there's
some
other
things
that
I
could
configure
for
you,
including
prowl,
and
all
its
various
plugins.
D: What's really cool about being added in this way is that your results, past and future, will be available via Gubernator, including your logs and your artifacts, and that allows for some really cool integration with the rest of our community, including projects such as APISnoop and the CNCF cross-cloud dashboard. If you need help beyond this, please join the CNCF CI Slack, attend meetings like this public CNCF CI working group meeting, or join the mailing list. Thanks.
A: Skits like this, and the visuals on that — especially if you can get maybe some slides together. That last part reminded me of something Ed from Network Service Mesh put together for an intro; it's kind of a skit, it's all drawings and slides, and they go through the interactions between a user and someone coming in to use it. Maybe taking what you have and having it written up, so that you can get visuals to tie along with the story, would be good.
C: One project uses Buildkite, and all of this has different requirements for different things, and some stuff is historical: we added Circle because we originally couldn't publish our Docker images and our GitHub releases with Travis. So we ran tests in Travis but ran our release builds in Circle, and so we're just like: we want to consolidate this, and we wanted to see what the CNCF was actually doing about testing and builds and, you know, see how they work.
C: What else — yeah, we have one project where we use Buildkite because we need to be able to do building and testing on non-x86 platforms, and also testing on non-Linux platforms. So we have some BSD builds and some PowerPC builds, because the architecture-specific compiles need to be tested.
A
Okay,
we've
brought
up
the
this
document
that
you've
shared
in
the
notes
and
our
thoughts.
The
I
guess
I
think
we
may
have
spoken
either.
I
spoke
with
you
Ben
or
someone
else
many
many
months
back
a
little
bit
about
what
we
were
doing
from
a
CN
CF
support
side
for
helping
with
CIC
the
Cross
Club
the
dashboard
and
the
the
tools
underneath
would
be
one
of
the
projects.
There's
a
lot
of
other
projects.
A
Hth
is
talking,
I
was
just
talking
about
some
of
them
and
I
think
probably
be
good
to
just
maybe
chat
and
go
through
some
of
these
and
look
at
what
the
big
Nate's
are
and
maybe
HH
should
be
interested
in.
Taking
a
look
at
this
sure
we
could
do
that
tomorrow
or
any
other
time.
Okay
and
then
HH
if
I
mean
if
this
is
something
you
have
time
for
and
want
to
check
out,
know
that
you're
involved
with
a
lot
of
the
different
groups.
A
Kubernetes
lies
is
on
the
testing
side
and
across
and
CF
the
been
the
open
CI
that
was
asked.
Did
we
have
a
link
earlier
in
the
slides,
I?
Think
I
may
have
trying
to
go
back
as
the
open
CI
group
might
be
something
to
get
involved
with,
because
there's
a
lot
of
different
groups
on
that
that
are
doing
a
lot
of
different
systems
that
they're
using
and
that
could
be
helpful
for
Prometheus,
sure
and
then
I
think
y'all
have
access.
A
Yeah,
if
you
can
catch
us
while
we're
at
the
booth
and
then
and
then
maybe
hop
on
slack
on
the
cloud
native
slack
and
the
CNC
FCI
Channel
yeah.
A
Great,
if
you
want
to
drop
the
link
and
maybe
tag
HH
and
myself
and
let's
see
I,
don't
know
anyone
else
does
inch
Melvin
I,
don't
he
may
be
interested
I
know
that
there's
a
lot
of
different
things
that
were
each
of
us
are
doing
and
we're
trying
to
get
the
action,
the
CI
working
group
itself
and
any
of
the
projects
that
would
be
beneficial
for
others.
So,
ideally,
if
there's
something
that
Prometheus
needs,
we
can
try
to
have
something
in
place.
That's
reusable
by
other
CN
CF
projects.
A
Cool
and,
let's
see,
I,
think
the
only
other
item
on
the
agenda
with
us.
We
were
having
some
build
failures
with
Prometheus
actually
on
the
CNC
f,
CI
dashboard
in
my
CI
dev
environment,
and
it
looks
like
it
could
be
something
with
dependencies
that
we
need
to
add
to
the
docker
container.
So
it's
something
I
think
we
need
to
defer
and
maybe
reach
out.
We
have
a
we'll,
have
a
public
ticket
and
maybe
tag
tag
y'all.
If
we
need
some
help
system.
Oh
yeah.
A: Sounds good. And then we noticed this actually started in 2.4.0, and we've been doing various changes on our end to support the new clouds and various things, so it could be tied into that, or dependencies — don't know, but we'll take a look. All right — I think that's it, unless someone else has anything; we'll stop here. So, okay: slide 28.