From YouTube: CNCF CI WG Meeting - 2019-03-26
A: Thanks, Taylor. This is Lucina from the co-op, and here are some upcoming events. Ed from Packet let us know about Linaro Connect in Bangkok, Thailand; that's the first week of April, and it looks like that'll be around CI systems and scheduling, so there's a link to that event. Also, the first week of April is the Open Networking Summit North America 2019, and we've got a couple of CNF Testbed events on the calendar. On Wednesday of next week there's the tutorial on the CNF Testbed, "Driving Telco Performance with the Cloud Native Network Function Testbed," and also next Wednesday we'll be presenting how we set up the CNF Testbed, which is reproducible by following the steps.

The end of May is KubeCon + CloudNativeCon Europe in Barcelona. There will be a CNCF CI intro and a deep dive, two sessions. There will also be an 85-minute birds-of-a-feather session on the CNF Testbed, and in the meeting notes there are links to those events, so you can add them to your calendar.
A: All right, sounds good. So, next on the agenda, and feel free to add any agenda items if you'd like: I'd like to give a status update on the CNCF CI dashboard. We can take a look at the v2.0.1 and v2.1.0 releases, as well as what's in progress and what's next, and then Hippie Hacker will talk about APISnoop and the Prow automation.
A: A new Test column has been added after Deploy, and that one will show the status of the projects' end-to-end tests. Right now you'll notice they're all N/A, as the e2e tests are inactive at this time. It's part of our goal of increasing collaboration on the CNCF CI dashboard with the CNCF project maintainers, to meet with those contributors and start building those end-to-end tests.
A: And then on March 21st, last week, we released v2.1.0, and at a high level we updated how to update project details. That's the first step in starting the increased collaboration with CNCF project maintainers, so that we can add more CNCF projects to the dashboard faster. So the first step was to update the project details, and that is what we see in the Project details column here. So it's the logo,
the display name, the subtitle, and it's also a button: clicking on that button goes to the CNCF project's GitHub repo. We also created a contributing guide with steps on how to update those project details, and we resolved a "last updated" counter that was unexpectedly showing "No". So now it is working as expected.
A: The link to the contributing guide is included in the slide deck there. It will be incrementally updated as we add the steps for each column, essentially. So we've gone through the first step, the project column: how to add and modify those details. The next step will be the release column, and then we'll work on the external integrations with the CI systems for the builds, the deploys, and the end-to-end tests.
A: Here's what our bug looked like, "Last updated: No", and now it shows "12 hours ago", or the correct time since the 3 a.m. Eastern refresh of the CNCF CI dashboard. We've got several items in progress as well. The first one is that dropdown that I mentioned earlier: we currently see only the Kubernetes stable version on cncf.ci, and we are working on the test environment dropdown so that you can toggle between stable or head for Kubernetes, and this is a building block for adding arm support.
A: Our goal is to add arm support to Kubernetes and CNCF projects on the dashboard. After we get the Kubernetes stable and head test environments also working on arm, in addition to the current machine, the Intel machine, we will add arm support to CoreDNS. So this is the design mock for adding arm support to Kubernetes. On the first iteration we did receive an enhancement request, and we have that in our design thread now, to iterate on that idea.
A: So what's next is to add arm support to CoreDNS, and this is what we anticipate it will look like in a perfect world: the Kubernetes head environment on arm will be provisioned on bare metal at Packet, the provisioning phase will be a success, and then CoreDNS will build its latest release, 1.9.0, onto the arm machine, and that build will be a success, both stable and head. Then those build artifacts will be deployed on the other provisioned Packet machine, and that will all be successful.
A: To practice changing project details, we have ticket 77, where we'll be updating the logos on cncf.ci. We'll be replacing all of the project logo icons and the CNCF logo with SVG versions, and we'll follow our documentation and improve it in case any steps need to be updated in that contributing guide.
A: The roadmap can be found in the crosscloudci ROADMAP markdown file. This month we'll continue adding arm support to Kubernetes and CoreDNS on cncf.ci; next month we'll continue adding arm support to additional graduated CNCF projects, and we'll update the stable and head release
of Kubernetes to 1.14.0. We'll also change how the release details are added to cncf.ci, add support for those external integrations, and write up the documentation and steps. Skipping ahead, in May at KubeCon Europe we plan to do an intro and a deep dive, and the deep dive session will be how to add a project on cncf.ci, so we'll be working on those steps incrementally to have all of the steps ready, as well as the contributing guide up to date.
A: Last slide: we welcome your feedback, your enhancement requests, and any questions that you may have. Feel free to add any issues to the crosscloudci dashboard issue tracker. If you haven't already joined our Slack channel, please join slack.cncf.io, the #cncf-ci channel. You can also join the mailing list, cncf-ci-public, and these calls are monthly on the fourth Tuesday.
B: We have a lot of projects within the CNCF, and one of the things when we started the CNCF CI working group was that we were supposed to try to find some ways to, you know, help with the CI. In working with our own project, APISnoop, we've been needing our own deployments, so that's the direction we're going, and I thought it might be useful to share some of our approach, just to get some feedback and thoughts from the greater CI working group. I'm going to try to share my screen, and hopefully this is super easy and works wonderfully.
B: Let me try again next time. I'm sorry for that; I don't want to take up your time. I can't present, and also, because I can't see, I can't unshare. Oh wait, this might work. Let's try this.
C: So, for folks who don't know, the CNF Testbed is trying to create a fully reproducible environment for testing network functions on OpenStack and Kubernetes, doing various use cases. Most of them are performance-focused, and there will be others testing different things (functionality, resiliency, whatever), but most of those have been performance right now. We're testing on Packet as the primary area, and there's also some collaboration with the FD.io CSIT testing lab, where part of the test cases are being replicated there on
Linux Foundation systems. So on Packet we create the machines, provision the resources from scratch, and bring them up; those have been primarily Mellanox NIC-based systems, and we've been able to do that for Docker, KVM, Kubernetes, and OpenStack. The big update that's happened has been bringing this up to date on the OpenStack side. We've been adding support for Ubuntu 18, which would be one of the biggest ones, getting parity with the Kubernetes side, so using the same host setup as Kubernetes, and this brings everything up to date across the board for OpenStack.
C: On these systems the vSwitch used for OpenStack can be swappable. By default you're going to use OVS and OVN with OpenStack for doing the switching and networking; for doing the high performance we're using an FD.io project called VPP, and that allows us to do high-performance network connectivity and access to the cards, the NICs themselves. We're able to do the same type of connectivity on both OpenStack and Kubernetes, so that we're doing comparisons. On the OpenStack side we're using Neutron,
if you're familiar with that, for all the networking, and then on the Kubernetes side we keep the same flat layer 3 network connection and then add additional interfaces. Right now those network interfaces in Kubernetes are manually stitched together; we're looking at adding in NSM support, which is tied in to this next item. But on the OpenStack side, some of the items that we wanted to get to were supporting an additional type of system that's coming up on Packet, with the Intel network cards.
C: They don't require additional drivers, unlike the Mellanox; there are proprietary drivers on the Mellanox NICs. Those are public systems, you can get them now, but the Intels don't require any additional drivers. They're built in with Linux, and also another Linux Foundation project, DPDK, and you can use that out of the box. So we've been working to add support for the installation of the OpenStack VPP deployment that we have, and to update it to Ubuntu 18.04, and that's happened at this point.
C: Well, they actually came out, but they don't have enough systems provisioned for the public yet; those will be an extra-large instance type, and they have Intel NICs. What this means is anyone can go and take the code in the CNF Testbed (currently it's in the tools area), and you can deploy an OpenStack cluster, and you can decide whether that's OVS or you can do the VPP. Either way you are going to be able to configure your network for whatever test case
you want, using standard Neutron configuration, and you can do that on the publicly available machines. Then on the Kubernetes side you can deploy a cluster, also on the same machines, the Mellanox and the Intel systems, and then run any of the same sort of tests. So the other item would be moving towards: how do we configure the network connections that you're going to run CNFs on for Kubernetes?
C: So at some point we may be doing some test cases and configurations to showcase Multus and other CNI plugins. Right now those are stitched together using Ansible and some other items; we configure the CNFs at deploy time, we use Helm charts, and the interfaces show up in the host system. Going forward we're looking at using something like Network Service Mesh, in addition to exploring CNI plugins.
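
For reference, a CNI plugin like Multus attaches extra interfaces to a pod through a NetworkAttachmentDefinition; a minimal sketch (the name and the macvlan config here are hypothetical, not taken from the testbed):

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: dataplane-net              # hypothetical attachment name
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",             # any delegate CNI plugin could go here
        "master": "eth1",
        "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
      }'

A pod then requests the extra interface with the annotation k8s.v1.cni.cncf.io/networks: dataplane-net.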
C: You can deploy the vSwitch using a Helm chart to an existing Kubernetes cluster, at which point you can use that vSwitch in the way that we're using it, or potentially otherwise. We'll be talking with FD.io and the VPP group about potentially having a container; that's one of the items that we want to do from here, having that as a public container that could be useful for other people to use in different deployments on top of Kubernetes. So that's something that we're thinking will come out of this, beyond just the test case here.
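
As a rough sketch of that kind of Helm-based vSwitch install (Helm 2 syntax, which was current at the time; the chart path, release name, and value are hypothetical):

    # install the (hypothetical) vSwitch chart into its own namespace
    helm install ./charts/vpp-vswitch \
      --name vswitch \
      --namespace vswitch \
      --set dpdk.enabled=true   # hypothetical toggle for DPDK NIC binding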
C: The other related item is unprivileged CNFs. For performance reasons and other considerations on Kubernetes, such as pinning specific cores to containers, a lot of the test cases we're running run with privileged containers. So we've been working towards having both privileged and unprivileged containers, because those are different use cases, as you would see in the real world. That's another item that we have in progress right now, and it's related to NSM.
C: And I think that's it as far as the current status. A potential new item, which we would love to get some feedback on, is about additional use cases, one in particular looking at SR-IOV use cases and working with some of the folks that are on the Network Service Mesh and other groups that are interested. SR-IOV is a way to do acceleration in the VM world; it's been used a lot in KVM, there are places you could use it in OpenStack, and it's a very common thing with telcos.
C
So
what
we're
looking
at
is
what
are
some
real-world
use,
cases
that
are
using
SRV
and
other
type
of
performance
use
cases
that
we
can
take
and
re-implement
in
the
testbed,
so
that
other
people
can
rerun
these
things
and
share
and
understand
them
and
then
take
that
and
then
implement
a
cloud
native
version
that
would
be
and
following
the
methodologies
we
would
expect
on
kubernetes.
So
that's
a
goal
and
this
one's
just
getting
going.
C
D: [question not captured in the captions]

C: Nothing right now as far as implementing, and it's definitely been mentioned as being in use with some people. But as for any specific use cases on arm: most of the use cases we're looking at would be on Intel CPUs, and not even AMD CPUs. We've intentionally avoided the Packet AMD machines, and a lot of that has to do with specific performance tuning, which a lot of the folks involved know about, that's available on the Intel machines. So I think...
C: Okay, there are no other questions. Hippie Hacker, are you wanting to try again? That'd be great.
B: No, because what I'm going to be doing is... oh, there are a lot of links that I click on, and I'm going to share via Zoom. So bring up the web URL I just shared and the Zoom sharing side by side: on the right-hand side you might have the Zoom sharing, and on the left-hand side you might have this URL, the web URL I gave you. I'm going to attempt to pair my single browser session really quick.
B: Let's try that; it'll be a little different. If you go ahead and share that, share the URLs as we go, then I'm going to start off with just a quick overview and then go into the mirroring and the pipelines and how that goes into an environment, and then go into our cluster overview of how things are connected via, yeah, the jobs, and down at the namespaces and the pods. Also I'm digging directly into a particular deployment, and probably not getting all the way to the bottom to the build.
B: But just to give you a taste of what we're up to, I'm going to go ahead and focus on the overview for a minute. The software we're doing has some back end and front end, and it all needs to be glued together. We looked at Netlify and a few other CI things that didn't provide the flexibility in deploying a complex app, and we want something as simple as this:
B: When somebody creates a PR, we wait for this CNCF CI bot, and that's just a bot that's responding and saying: hey, here's the results, the pipeline, and the URL. So I'm going to do that really quick as a TL;DR, so we can see how far I would like to take this. I think what I might do is have you share. I'll start a new PR on our branch, so I'm going in, and I'll drop a link to this real quick.
B: I'm going on ticket 121, and I'm going to create a new PR, and I'm going to drop the PR in the channel real quick once I do it: demo, demo PR for CNCF CI working group, and I'm going to create a pull request. Now I'm going to drop that PR into our channel for everyone; I just created that one. There are some fun things around the automation. This includes some of the things that the Kubernetes community is using, including Prow.
B: So, automatically on that job, moments after I created it, the CNCF CI bot is going: hmm, you don't have a release-note block, we've got a process around that, welcome to the community. If I was committing to this repo for the first time, it would say: welcome, thank you for your first commit, that was delightful, here are some of the other things that you might need to know. It also added a note around the release
note, like I said, there. And we have some automation around who within this repo we should contact to apply as an approver and a reviewer. I won't go into the specifics of that; I think it's a whole other CI working group meeting. But all of this tooling was developed by the Kubernetes community, clearly SIG Testing, which we spent a lot of time with, and I think it has a great benefit in broader use across the CNCF and, yeah, its community. The last thing that it did here, as far as the CNCF CI bot, was adding size labels.
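
For context, Prow reads a fenced release-note block from the PR description, and the approver and reviewer assignment comes from OWNERS files in the repo. Minimal examples of both (the usernames are hypothetical):

    A PR description satisfying the release-note check:

        ```release-note
        NONE
        ```

    An OWNERS file at the repo root:

        approvers:
          - alice
        reviewers:
          - bob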
B: So it says this is a fairly large change. All of these failed pieces are part of Netlify; our application has a back end and a front end, so deploying the static content via Netlify wasn't quite working. We're exploring some options to remove that, but in the interim we went ahead and added this other approach using, er, using GitLab. So you can see at the bottom, if we were to have all of these go,
we have a passed CI pipeline, and I'll click on that to give us the details on the pipeline itself. You can see we had a build, and then we have a review, but it hasn't gotten to this next step yet, because we're still in the review phase. If I click on the review... oh sorry, you're not following me, so click on the other details.
B: Next to the pipeline there, on the right, at the bottom there's a green check box, the only green one, yep, and then the review in the middle; go click on that, and at the bottom there's a URL. It says URL there, and it says HTTP, the new PR CI; it's up, yeah. So if you want to copy that for a moment, I will kind of show another flow with that later. And this new branch has an ability to filter stuff; I won't go into actually doing this. This isn't an APISnoop thing,
this is a CI thing. So I'm going to go through our shared presentation, go here and say: we've created a PR, we waited for the CNCF CI bot, and we got a deployment URL. That's the TL;DR. I'm going to go ahead and close that and back out to our larger overview and dig into this just a little bit. On my side, if I click on these, they'll open up into a URL; I think I can publish this real quick. This goes through to the settings for the mirroring; this might be quicker.
B: We won't go to the URLs, or else I'll just talk about it. We do some mirroring: for each commit there's a set of branches and pipelines and jobs that flow through, and that ends up being a specific environment and review branch. I mean, those links will follow that; I want to publish it later for a specific commit. We're actually on that specific commit right now.
B: So Taylor, if you'll go back to the pipeline that you had, just kind of follow: if there's a commit over there, click on the commit of something, yep, on the commit. This is where you can see the parts of the pipeline, and that's the commit to the pipeline, and the build and deploy jobs are those two jobs.
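
A minimal sketch of a GitLab CI pipeline shaped like the one being shown, with a build stage and a review deploy stage (the script bodies and review URL are hypothetical; the $CI_* variables are standard GitLab CI ones):

    stages:
      - build
      - review

    build:
      stage: build
      script:
        # build the app image and push it to the GitLab registry
        - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
        - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

    review:
      stage: review
      script:
        # deploy this commit's image as a review environment
        - kubectl apply -f deploy/
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.example.com   # hypothetical review URL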
B: If you'll mouse over the first and second, those are the stages there; it says it built it and it passed it. If you click on the build real quick, we'll just take a look. See, these are the jobs; this is the specific example, and underneath that build job you'll see the Docker container getting built and pushed to a registry. If I go over to the registry: on the left-hand side there's a mouse-over, underneath CI/CD, yeah.
B: There you go, and you can see the different containers that we pushed while we've been doing these different releases. Now, the next thing is... you're not logged in, so you can't see these things. We actually have some environments and can get to a console on these; I can quickly add you, or, okay, we won't go through that. This is the pipelines and setup, so I'm going to close that out. Are there any questions before I go any further?
B: So if I ran kubectl get namespaces there, you can see we have an apisnoop-ci namespace I set up a day ago, and we have all the rest of the cluster from a while back, and so those are the different namespaces we have. The gitlab-managed-apps namespace is where all of our GitLab-managed stuff goes, and the apisnoop-ci namespace is where our deployments and reviews go. Inside the gitlab-managed-apps namespace we have a bunch of containers, and we'll go...
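
The commands in this part of the demo look roughly like the following (namespace names as described above; the exact output will differ):

    kubectl get namespaces
    # NAME                  STATUS   AGE
    # apisnoop-ci           Active   1d     <- review/staging deployments land here
    # gitlab-managed-apps   Active   60d    <- GitLab's managed apps and runners
    # ...

    # list the GitLab-managed containers
    kubectl -n gitlab-managed-apps get pods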
B: We have our different production, review, and staging deployments, and the replica sets that are there. To get down quickly to inside of our pods: this is the set of pods that we have just now, a few more running, including the ticket that we have there, and I won't go through the details on that pod, for time. So, widening out our view to go to the next step, about digging into the deployment, these are the URLs.
B: If you have write access to the repo, to that particular branch... and so I need to do a little bit of cleanup on this, but within this we can get a terminal and also get in and see the artifacts for that. Underneath here, this maps our deployments, and our deployments to a pod, and since we have the pod, we can start executing commands on that node. So this is just a cool little command that looks inside the namespace for our CI stuff and goes inside the particular review deployment pod.
B: The last thing is this exec shell, and this will actually give us a shell into that node. So execute that, and I think our other command will cd to where the data is, and now we can see we're inside that production, or this review, ticket. I think that's it for now; a quick, quick overview of the details of what we're doing, to get some initial feedback.
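
That last step is roughly the following (the namespace, label selector, and data path are hypothetical stand-ins for the ones in the demo):

    # find the review deployment's pod and open a shell inside it
    POD=$(kubectl -n apisnoop-ci get pods -l app=review -o name | head -n1)
    kubectl -n apisnoop-ci exec -it "$POD" -- /bin/sh

    # inside the pod: change to where the data lives
    cd /data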
B: I think it might be useful for other CI... for projects out there that are looking to have something like this, where they have a product that needs to be looked at via the web, so they can have commits come in and do a deployment. I think, separately, this is just one little aspect of what GitLab does. I'm actually really interested in seeing Prow and some type of CNCF CI bot interacting with our community, in the way, with the extreme success, that it's being used in the Kubernetes community.