From YouTube: CNCF CI WG June 27 2017
D
Before I go through the slide deck, I wanted to go ahead and start the demo so it can happen in parallel with the slides, because it's kind of boring just to watch the little green lights come on. So this is one of our projects: the CoreDNS pipeline. I'm just going to click on Run Pipeline, choose a particular tag that we're interested in, and click Create Pipeline. I'll come back to this later as we let it go and do its thing.
D
I'll go back over to my slides now; I think I can just hit Present. It looks fullscreen. So, why cross-cloud CI? Our working group has been tasked with demonstrating best practices for integrating, testing, and deploying projects within the CNCF ecosystem across multiple providers; our particular approach is cross-cloud. Specifically, we separated it into two main components. The first is cross-project: for each of the CNCF repos, creating pipelines that generate artifacts that we can consume and use in various ways.
D
Again, back to the cross-project and cross-cloud approaches: each project repo can have a pipeline where we build the end-to-end test components, compile the binaries, create the packages, and tie that together per release or commit. When any of these change, it can trigger a cross-cloud pipeline. At the bottom there is our cross-cloud pipeline, which collects all of the artifacts for deploying across the clouds.
D
These are the six repos that are part of this demo today. Prometheus has three repositories that compose a Helm chart, which we have combined to produce those different artifacts. CoreDNS and Kubernetes are straightforward. The cross-cloud repo is where we have our cross-cloud container for deploying across multiple clouds; it also has our CI pipeline that combines the artifacts from these other repositories.
D
Here's a quick look at the cross-project pipelines for these three repos. They do similar things: building our end-to-end test suites, building the binaries, setting up the containers, and creating a release. Those links can take you to the pipeline builds that are part of that, if you want to take a look later. Those artifacts trigger a cross-cloud and end-to-end test pipeline, starting out with the collection of the artifacts, then the cross-cloud deployment of Kubernetes across the clouds, the deployment of the projects across those clouds, and our matrix of end-to-end tests.
D
The first stage is the build, which compiles the binaries and end-to-end tests. The second stage builds tarballs and containers and pushes those to a registry, and the third stage collects those into an artifact pinning file that we pass on to the downstream cross-cloud pipeline. Per commit, we generate binaries that can be downloaded as artifacts. These are some links to those artifacts for specific jobs, based on the tag you're interested in.
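A three-stage per-project pipeline like the one described here might be sketched, very roughly, as a `.gitlab-ci.yml` along these lines. All job names, scripts, and the pinning-file name are illustrative assumptions, not the working group's actual file; only `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are real GitLab CI variables.

```yaml
# Illustrative sketch only; job names, scripts, and file names are assumptions.
stages:
  - build      # compile binaries and e2e test components
  - package    # build tarballs/containers, push to a registry
  - artifacts  # record what was produced for downstream pipelines

build:
  stage: build
  script:
    - make all
    - make test

package:
  stage: package
  script:
    - make release-tars
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

pinning:
  stage: artifacts
  script:
    # write the image URL + tag into a small file the cross-cloud pipeline can read
    - echo "IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" > pinning.env
  artifacts:
    paths:
      - pinning.env
```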
D
We also generate the container images, which anyone can use; they don't have to use our approach for deploying these artifacts. But it's important to see that this set of pinnings allows us to do per-commit deploys, combining multiple projects into cross-project deploys at this point. So what can we do with these project artifacts?
D
These are the four stages. First, the artifacts stage, where we pull in the configuration pointing to each of the artifacts for all of the commits we're interested in. Then the cross-cloud stage, where we use the Kubernetes artifacts from its pipelines to deploy across our supported cloud providers. Then our cross-project deploys, where we deploy each of the projects onto each of our clouds, using our Helm charts and an environment variable pointing to the specific release for that commit. And then we run our end-to-end tests on a matrix of cloud and project, including Kubernetes.
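The cross-project deploy step, as described, drives Helm with an environment variable that pins the per-commit release. A hedged sketch of what one such job could look like, in Helm v2 syntax of the era (stage name, chart path, and the `$PROMETHEUS_TAG` variable are guesses, not the actual configuration):

```yaml
# Sketch only; stage names, chart paths, and variable names are assumptions.
cross-project-deploy:
  stage: cross-project
  script:
    # $PROMETHEUS_TAG would be pinned upstream to the commit/release being tested
    - helm install ./charts/prometheus --name prometheus --set image.tag="$PROMETHEUS_TAG"
```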
D
This is what the cross-artifacts stage looks like at a high level: for each of these projects, we're pulling in the URL for the image and the tag. So if you were to run docker pull and specify the Kubernetes image and commit tag, you would pull down that API server image, or the CoreDNS or the Prometheus image. Our cross-cloud deploy stage uses our cross-cloud repo's provisioner, which is a Docker container per cloud, and it uses the environment variables applied in this CI dialog to deploy to AWS, GCE, GKE, and Packet.
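The per-project image reference the artifacts stage resolves can be thought of as a registry path, project name, and tag joined together. A minimal shell sketch; the registry path and tag below are made-up placeholders, not the real values:

```shell
# Hypothetical values; the real ones come from the cross-cloud configuration.
REGISTRY="registry.example.com/cncf"
PROJECT="kubernetes"
COMMIT_TAG="v1.7.0"

# Join them into the reference a consumer would docker-pull.
IMAGE_REF="${REGISTRY}/${PROJECT}:${COMMIT_TAG}"
echo "${IMAGE_REF}"
# a consumer would then run: docker pull "${IMAGE_REF}"
```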
D
So here's the test suite we put together. We generate some end-to-end test containers that we deploy in the same way that we deploy our applications or projects, and here's some of the output. For Kubernetes it's a bit different: we're just using the upstream conformance tests. As those finish, they'll update the status in that column, and with that last column we can do some interesting things at some point.
C
Yeah, I had a quick question relating to how much you reuse the existing build, test, and deploy scripts that the individual projects themselves use by default. Do you use those, or have you kind of written your own? It sounds like your cross-cloud provisioner, for example, knows how to provision onto all of the clouds itself, rather than using whatever default mechanism the project provides. Is that correct?
D
Yeah. We're intentionally separating out the deployment of the clouds so that we have a way to test all of the projects together. There is a good place to put project-specific deployment: in the cross-project pipeline for that specific project. Our focus right now is on the cross-cloud side, tying all of those projects together into a single status.
D
We are looking at kubeadm as a replacement for that, and we're going to be working more with the Kubernetes testing SIG, to deploy in a way that supports not just their project but all the other projects as well, especially since the way they're building artifacts is changing. I can't quite remember the name of the build system... Bazel? Yeah, so, integrating the Bazel approach. I think we talked about that on our last CI call.
C
That makes sense. The reason for my question is that I think all of these projects probably have moving targets with respect to what's required to build and deploy them, and to whatever extent you can reuse what those projects are doing already, the less effort you'll have to put into keeping up to speed with the latest and greatest way of building things.
D
It's when we go cross-project and tie together multiple projects that we're trying to have a larger conversation; it would be really useful to find the best approach as we all work together. Our cross-cloud pipeline stages are: collect all the artifacts, and further down, where we've gotten to at this point, the cross-cloud deployments. After we do our cross-cloud deployment, we have a kubeconfig that points to each of our clouds, and then we keep going one more slide.
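The "one kubeconfig per cloud" handoff just described could look roughly like the loop below. The paths and cloud names are assumptions for illustration, and the kubectl call is left commented out since it would need live clusters:

```shell
# Assumed layout: the cross-cloud deploy stage leaves one kubeconfig per cloud.
for cloud in aws gce gke packet; do
  KUBECONFIG="/workspace/${cloud}/kubeconfig"   # hypothetical path
  echo "would target ${cloud} via ${KUBECONFIG}"
  # kubectl --kubeconfig "${KUBECONFIG}" get nodes
done
```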
D
We use Helm charts to decide which of the projects we were going to tie together, and after we do our project deploys, CoreDNS and Prometheus are now running on multiple clouds. On the next slide we do our end-to-end tests. These deploy our end-to-end tests, modeling as much as we could what is available upstream, for example in Kubernetes, so we're going to try to tie together specifically how we can use the existing upstream end-to-end tests. This is just a quick tying together of some testing.
D
So, to summarize: here are our cross-project pipelines. For each project we wanted to tie in as much upstream as possible. Ideally, we would love to just add a simple YAML file to each of these repos, one that is under the control of the project and that we can submit PRs to, and it drives their process for per-project pipelines; then all of us can have different approaches while we define some best practices for deploying the next part, which is our cross-cloud. This is what our dashboard looks like right now.
D
So this is kind of a quick revisit of how we're doing this right now. Our overview: the unified CI/CD platform is GitLab, and again, this is just adding a simple YAML file to each of these repos. Our cross-cloud repo contains the provisioning tool, based on Terraform and cloud-init, with a cloud-init endpoint where we can take multiple approaches, and we're exploring some different options going forward.
D
There's our URL, and if you'll click on that link (Taylor, the CNCF one here) and load that, you can go to the CoreDNS project and just click on Pipeline. Thanks a lot. Alright, that's our demo and where we're headed. If anybody's got questions, we can review that. I think we have four minutes before the halfway point.
C
Yeah, I don't see any major weaknesses so far, and it's great that it's sort of decoupled into containers and that these individual pieces of the CI workflow are, you know, individually usable. As you mentioned, one doesn't necessarily need to use GitLab to tie all these pieces together; they're independent containers which can be wired up using, in theory at least, any other CI process.
A
Yeah, I agree: the decomposition is a huge value here. That's very encouraging to hear. There has definitely been a little bit of wandering in the woods to get to this point, and I think we definitely now need to backfill and have a more detailed design document and such, so that other people can come in, look at this project, and see how to extend it.
C
I imagine the challenge will be getting a pipeline like this to be stable, in the sense that, you know, with all the new PRs and things going in, it will continue to pass and not produce false negatives, if I can put it that way: not failing because some project has changed and the infrastructure is not, you know, self-aware enough to incorporate that change properly. But I guess experience will show us to what extent that's a problem.
D
It's not tied to upstream right now; rather, we're tied to upstream by syncing every 15 minutes or so, and I didn't show those components because, again, it's less stable, just like you were mentioning. There was a change in how Kubernetes built things on head, and so that particular configuration for how we do it was failing, so we couldn't actually show that pipeline working. I think, as you said, we won't know the answer to that until we get it up and running.
D
When we first looked at this, the reason we have CI-stable in the tag name is that those tags represent, on GitLab or GitHub, the specific releases for each of those projects. Even if a project hadn't gotten to the point where there was an alpha or a beta, if there was a stable release it was a good place to have a stable starting point. So, Taylor, if you're here, would you mind clicking on the cross-cloud pipeline real quick, the other cross links there... no, not the cross-cloud link.
D
Just go to the repo: click on Repository, then scroll down and click on that .gitlab-ci.yml file. At the top of it you'll see our pinnings for the projects. These are what we concluded, at the time, to be the stable artifacts that we wanted to follow for this demo. We have another repository; Taylor, if you'll drop down CI-stable and go to CI-master: our CI-master pipeline actually pulls from the master branch in these projects, so pretty much from head.
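The pinning block at the top of that file presumably looks something like the following. The variable names and version strings here are placeholders for illustration, not the actual pins in the cross-cloud repo:

```yaml
# Placeholder pins; the real variable names and versions differ.
variables:
  KUBERNETES_TAG: "v1.6.6"
  PROMETHEUS_TAG: "v1.7.0"
  COREDNS_TAG: "v0.0.8"
```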
C
To be clear, I think there is value, you know, even if we never got this monster one that you have on screen working reliably. Just being able to validate that all of the stable releases of all of the CNCF projects actually pass their respective end-to-end test suites is independently of value, and ideally we would want the master stuff to work as well. I'm also curious about the extent of the tests; I guess you're only running the conformance tests at the moment?
D
Right now this was primarily rigging to pull in the artifacts that we generate, because the upstream projects don't currently pull in and deploy all of their artifacts. For example, Prometheus doesn't pull together all of their artifacts for their tests; they test per repo. So this is about showing, and communicating to a project, an approach that combines their current per-repository pipeline testing across those repositories.
B
Yeah, I can share the YAML file.
B
Alright, I don't have a slide, so I have to show my code. We are building Kubernetes and Prometheus; since Kubernetes uses Bazel to build, I made a Bazel image at the top, so it has a full Bazel environment.
B
Right now the environment has a Docker container at the top. When the container runs, it will pull the Kubernetes source into the container and run the build or test action. We transfer the data using environment variables: for example, this is the Kubernetes git repository, and this is what we do, build, test, or compile Kubernetes; finally, the binary artifact is uploaded to the respective repository.
B
We can then run the resulting Docker image directly or use it with Kubernetes. We also submit a YAML file, so we can use kubectl to run the complete pipeline in the Kubernetes cluster we created. If you want to try it, the Kubernetes master needs at least 8 GB of memory and 2 CPUs. We encapsulated three projects, then we add a profile.
C
Just wanted to clarify a couple of things, and correct me if I'm wrong: so clearly this demo will run in a console rather than any kind of GUI interface, but I understand there is a GUI associated with ContainerOps; you just haven't set that part of it up yet. We had a discussion prior to this, and I understand that you can do that demo in a couple of weeks' time if we're interested. Is that correct? Yeah.
B
Yeah, it does not run end-to-end tests yet; we are working on this. We want to deploy the Kubernetes binary artifacts to the cloud, then deploy Prometheus onto that Kubernetes and get them working together, running a cloud-native application to test. Okay.
C
In this case it is ContainerOps, and in the previous demo it was using GitLab's pipeline; is that a reasonable, very high-level summary of the differences? And clearly this demo is not as progressed as the previous one, in the sense that the cluster build and end-to-end tests are not quite there yet; neither is the GUI.
A
Would you mind just spending 60 seconds on this? I'm not quite sure I get the vision of ContainerOps, and I'm totally happy to wait for our call in two weeks or four weeks or whatever, but I would just love to see the sort of high-level aspiration, I mean. Would you say ContainerOps aspires to be a complete replacement for Jenkins?
C
I think he's probably better equipped to answer this question, but my understanding is that any one of these, I believe they're called stages, can encapsulate essentially anything. Right now they're encapsulating Bazel build steps or whatever, but they can also invoke Jenkins jobs. So it's not, strictly speaking, an either/or; you can actually use it to integrate multiple pieces of infrastructure that can perform tasks in the workflows.
B
Yeah, okay, so ContainerOps is designed to encapsulate everything in containers and run it with Kubernetes clusters. So each step is encapsulated into a container: it has the environment, it takes the input data through environment variables, and we collect the output data from stdout.
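The contract being described, input via environment variables and output on stdout, can be sketched in a few lines of shell. The `CO_DATA` variable name and its key=value format are made up for illustration, not ContainerOps' actual interface:

```shell
# Made-up contract sketch: input arrives in an env var, results go to stdout.
CO_DATA="action=build repo=https://github.com/kubernetes/kubernetes"

# Parse the action field out of the space-separated key=value input.
ACTION="$(echo "${CO_DATA}" | tr ' ' '\n' | grep '^action=' | cut -d= -f2)"
echo "RESULT: would run '${ACTION}' inside this container"
```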
C
Yeah, it sounds like, if anything, ContainerOps is similar to the pipeline portion of GitLab that was used in the previous demo, the difference being that it is fundamentally built on top of Kubernetes. So it uses Kubernetes to run all of the containers, pass all of the input and output between them, parallelize, etc.
C
Whereas I would guess, and I don't know the GitLab framework at all, but I would guess that it is not built on Kubernetes; it probably runs its own Docker containers or bash scripts or whatever to invoke each of those stages and pass the outputs between them. Is that correct, Chris?
D
Actually, for each of the stages that we use, the primary driver is a container. Are we running on Kubernetes runners? Not yet, but it is something that we can move to. For this particular path, we couldn't go with running those containers on a Kubernetes cluster, because the focus was cross-cloud, and GitLab CI itself focuses on using a single Kubernetes endpoint per repo, which we found to be a bit limiting; we wanted to be able to do multiple kube endpoints per repo. Okay.
D
I'm really interested in seeing if we can overlap in our upstream reuse of the projects. Particularly, going through the Kubernetes process, I found they build their container for doing their build within their build process, and then, the very next step, it's all one flow where they use that container to build. I'd love to make sure that we communicate with those teams from the CI working group mailing list to coordinate some of that over the coming weeks.
A
Okay. Oh, I'm sorry, Quinton, I dropped out for a second there, but I just want to say that GitLab is built on top of Kubernetes, their CI/CD pipeline, and they're also using Prometheus, so we've actually talked to them about the possibility of them contributing their project to CNCF. Unfortunately, it looks like we've hit a roadblock there. The lesson for anyone thinking of doing a start-up: please call your project something different than your company.
C
Yeah, I was just going to mention, following on from Chris's previous comment: I'm no longer an expert on the Kubernetes build and test frameworks, but my superficial observation is that a lot of them are kind of not very well decomposed. So it's not possible to reuse pieces of the workflow; as you, I think, alluded to, it's kind of like one big bash script.
C
It does, you know, A, then B, then C, then D, then E, and none of those pieces are easily reusable. Which I think is a value that both of these demonstrations bring: to, you know, break these into much more self-contained pieces, encapsulated in containers with a well-defined data flow between them, so that it's possible to plug and play these things in different ways, as opposed to just running a big fat bash script with a very large number of parameters that tell it what to do.
C
So we briefly discussed the possibility last week of improving the collaboration between these two projects, or at least maybe identifying areas where you're both spending time, you know, solving the same problems, and maybe partitioning that work out more intelligently, so that we can avoid duplicate effort. Is there any obvious area, whichever of the two demos this question goes to, where you could perhaps reuse each other's work?
D
He's made some really good first steps in going into Bazel, and we haven't had a chance to look at that yet. It has been mentioned by several folks that that's the build system we will have to use moving forward, so we should have some conversations around that with Kubernetes, and if there's starting to be a lot more projects using that build system, I'd love to leverage that work with the community.
C
Sounds excellent. The other one that occurred to me, in the other direction, is that you guys seem to have made good progress on the cross-cloud aspect of it, being able to deploy onto, I think, three clouds at the moment, and your ambitions are broader than that. It doesn't seem like the other project has got to that point yet, where it's figured out how to do a standardized deployment onto multiple cloud providers, so that might be an area where he could, you know, consult with you guys or reuse your containers or whatever work has been done there.
D
So let me move away from the demo and try to capture some action items from this, so we can follow up in our agenda. Taylor, can you grab that? Something like: we want to look at collaborating a bit on the Bazel approach in the coming weeks, to support the new build system.
D
We should have conversations on how to do that upstream. This will affect primarily the Kubernetes repos and the artifacts that are starting to be split out from them; Kubernetes is starting to get different components in the repo, and as they split those artifacts out, a good place that might overlap for us cross-project CI folks to hang out is the Kubernetes testing SIG. It meets, possibly today; I forget, since they were in the middle of an international transition on time, so I've got to check.
A
But just before we sign off, let me urge you both, and Chris: if you guys could update the READMEs of your projects so that new people coming to this can get an overview of what you're each doing, it would be incredibly helpful, so that there isn't such a high learning curve for new people diving into this working group later.
C
And I would put it in the README files for those projects as well, just so that people who stumble across the project will be able to go and watch them. A minor issue or question here: I have a calendar invite, which I think I sent out a month or so ago to everybody, which has an old link to the old Hangouts. Is there a new calendar invite with this link in it? Because I don't seem to have one, and I want to figure out how we distribute that to people.
A
Yes, so I will send a new calendar invite to the CNCF CI public list with the Zoom info on it. There is an issue, though, which is that normally we would meet on July 11, but the TOC has moved their meeting there so it doesn't interact with the July 4th holiday. So I think we should actually probably just cancel our meeting on the 11th, unless people want to just move it back an hour, which would be the other possibility; it would be 9 a.m. Pacific instead of 8 a.m.