From YouTube: Antrea Community Meeting 06/06/2022
Description
Antrea Community Meeting, June 6th 2022
A
Good morning, good afternoon or good evening to everyone, and thanks for joining this session of the Antrea community meeting. Today is Tuesday, June 7th; for our friends on the West Coast it is still Monday, June 6th. On the agenda for today we have, I believe, a very interesting conversation with Xu Yang and Jianjun about some updates and developments regarding our Antrea CI.
B
Thanks, Salvatore. Can you see my screen? Yes, we can, I can at least. Thanks. Hello, everyone. My name is Xu Yang, and today Jianjun and I will share an introduction of our Jenkins CI migration from the public cloud to the Antrea private lab.
I will give an example here. Previously, our setup on the public cloud once had some account-related issues, but at that time there was nothing we could do except ask the cloud team for help. That means having a Jenkins pipeline that is fully managed by the Antrea team is very important.
That's why we chose to do this job migration project for the Jenkins CI pipeline. We don't want to only deploy it on the private lab, but it's very important for us to support this capability in case we have issues on the public cloud. Okay, so let's move to the design overview. From this diagram you can see both our CI workflow and our test bed situation.
On our side we really need to support the private lab solution to improve the stability of the CI pipeline. You can see the blue arrow shows our plan to support job migration between these two Jenkins hosts, and after the job migration, most of our CI tasks run on the private lab.
We add a Smee service to handle the GitHub webhook, and the Smee client will receive the message from the Smee channel. Jenkins will also build the images on the bootstrap VM, but the difference is that it needs to allocate an IP for the control plane before deploying the CAPV cluster; it then creates the cluster with CAPV support and gets the test results.
B
Well,
I
understand
that
there
are
many
components
and
you
may
have
some
question
about
them,
so
we
have
an
introduction
to
these
key
components
to
show
their
unique
function
and
why
we
use
them
in
our
cell
pipeline
on
this
topic,
I
will
introduce
smee
host
local
plugin
and
jinx.
Draw
builder
and
junction
will
cover
the
cluster
api
cp
and
the
cpa
topic.
Also, we need to set up the Smee channel in the GitHub settings. With this setup, the Smee client can successfully receive the webhook request from GitHub and send it to Jenkins with the GitHub Pull Request Builder plugin. On the right side you can see that, if Smee is available, you will have an active service with the channel supporting Jenkins.
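For reference, here is a minimal sketch of what this Smee setup looks like on the Jenkins side, assuming the smee-client package; the channel URL and the webhook path (the GitHub Pull Request Builder endpoint) are placeholders, not the project's actual configuration:

```sh
# Install the Smee client and forward a public Smee channel to the
# Jenkins webhook endpoint inside the private lab.
npm install --global smee-client

# The channel is created on https://smee.io; the target below assumes
# the GitHub Pull Request Builder plugin's /ghprbhook/ endpoint.
smee --url https://smee.io/example-channel \
     --target http://jenkins.lab.example:8080/ghprbhook/
```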
So basically, this is a secure way to receive a job trigger message from the public GitHub, and that's why Smee is an important component for our private CI lab. Next is the IP management. After Jenkins starts to run the CI jobs on the bootstrap VM, it needs to allocate an IPv4 address for the control plane node of the newly created CAPV cluster. CAPV used to have an HAProxy feature, but it is deprecated now, so we have to allocate a static IP for the control plane in the cluster configuration.
It's very convenient for us to both allocate and release IPs with this plugin's support. The host-local plugin itself is just like a key-value store: the user just adds and deletes entries, and you can see the basic commands for this plugin on the right side, which simply add and delete an IP. It also takes a JSON file in which you can use the ranges field to define the IP range, the subnet and other configurations, plus a container ID parameter.
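As a sketch of how the plugin is driven (all values are illustrative), host-local reads a JSON configuration on stdin and takes the container ID and command from environment variables:

```sh
# Illustrative host-local IPAM configuration: one range carved out of
# the lab subnet, with allocations persisted under dataDir.
cat > ipam.json <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "ci-ip-pool",
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "192.168.10.0/24",
         "rangeStart": "192.168.10.100",
         "rangeEnd": "192.168.10.120",
         "gateway": "192.168.10.1" }]
    ],
    "dataDir": "/tmp/cni-host-local"
  }
}
EOF

# Allocate an IP under an arbitrary ID, then release it again.
CNI_COMMAND=ADD CNI_CONTAINERID=job-1234 CNI_NETNS=/dev/null \
  CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin /opt/cni/bin/host-local < ipam.json
CNI_COMMAND=DEL CNI_CONTAINERID=job-1234 CNI_NETNS=/dev/null \
  CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin /opt/cni/bin/host-local < ipam.json
```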
Jenkins provides us a tool called Jenkins Job Builder, or JJB, which supports verifying and uploading Jenkins jobs from YAML files. This is very useful for us to restore the migrated jobs quickly, and you can keep your job descriptions in a human-readable text format. It also has a flexible template system, so creating many similarly configured jobs is easy. Just as you see on the right side, we have a job-template YAML, which can define a template for multiple jobs, and once you define templates, you create jobs with a project definition.
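As a minimal JJB sketch (job and script names here are invented, not the real Antrea job definitions), a job-template plus a project expands into one job per listed variant:

```yaml
# '{name}' and '{test}' are filled in from the project below,
# yielding jobs such as 'antrea-e2e' and 'antrea-conformance'.
- job-template:
    name: '{name}-{test}'
    description: 'Run the {test} suite. Managed by JJB; do not edit in the UI.'
    builders:
      - shell: './ci/run-{test}.sh'

- project:
    name: antrea
    test:
      - e2e
      - conformance
      - networkpolicy
    jobs:
      - '{name}-{test}'
```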
The variables in the job template depend on what is supplied in the project YAML, and a macros YAML can also be used in the job description for Jenkins builders; this allows you to create complex actions in your jobs. JJB has more job fields that can be used to define your tasks, but we will not cover all of them today. So, overall, JJB can verify and restore Jenkins jobs efficiently for updates and migration in our pipeline, and once you set it up, you will find your Jenkins jobs in the console.
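The verify-and-upload workflow described above maps to two JJB commands (the directory path is a placeholder, and the jenkins_jobs.ini credentials are assumed to be set up already):

```sh
# Expand the YAML locally and check that it renders to valid job XML.
jenkins-jobs test ci/jenkins/jobs/

# Push the rendered jobs to the Jenkins controller configured in
# jenkins_jobs.ini.
jenkins-jobs update ci/jenkins/jobs/
```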
So here is our current job migration status. You can see that the Antrea e2e, conformance and network policy jobs have been migrated to the private lab; they have been running for about several months on our primary private lab.
C
I will continue. Thanks, Xu Yang, for introducing the Jenkins migration. Note that the Cluster API, Smee, Jenkins Job Builder and the IP allocation, these technologies are also used by the Theia project, and the tooling we are using is so efficient that it helped us set up the Theia CI in a very short time. I will now introduce the Cluster API, the Cluster API vSphere provider and AWS provider, and also how we are using them in our CI.
A kind cluster is kind of limited: the memory, the CPU and the storage are limited. So if we really want to do some more practical tests, we want to test on real Kubernetes clusters, and then the problem is how we can reliably and quickly deploy a Kubernetes cluster on demand for the Antrea CI, for each PR. The answer is the Cluster API. The Cluster API is actually a bunch of CRDs for you to describe your Kubernetes cluster and what it looks like, for example as shown on the right.
You know that in Kubernetes you have Deployments and Pods: you can specify how many Pods you want to maintain in a Deployment.
It is the same with a MachineDeployment: suppose you want three Kubernetes nodes, you can have a MachineDeployment with three replicas, and you will get three worker nodes. This is actually a common standard for deploying Kubernetes on different IaaS providers, and each IaaS provider has to implement this Cluster API. As you know, different IaaS providers offer different capabilities; for example, on AWS you can leverage the AWS VPC, and you can also leverage AWS elastic volumes, elastic IPs and the Elastic Load Balancer.
On vSphere we can leverage vSAN and vSphere-cloned virtual machines and so on. The Cluster API also allows you to specify IaaS-specific parameters, and this is the purpose of the infrastructure reference. Usually we will create a template for clusters; for example, for vSphere we create a template and specify vSphere-specific virtual machine parameters, such as how many CPUs you want to assign to a worker node and how many CPUs you want to assign to a control plane node.
On AWS, you need to specify parameters for security groups, the VPC and so on. So this is a very high-level overview of the Cluster API.
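Putting the pieces together, a minimal sketch of these CRs could look like the following; all names, the Kubernetes version and the referenced templates are illustrative, and several optional fields are omitted:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: antrea-ci-cluster
spec:
  infrastructureRef:              # IaaS-specific settings live behind this reference
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: antrea-ci-cluster
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment           # like a Deployment, but for nodes
metadata:
  name: antrea-ci-cluster-md-0
spec:
  clusterName: antrea-ci-cluster
  replicas: 3                     # three worker nodes, as in the Deployment/Pod analogy
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: antrea-ci-cluster
      version: v1.23.5
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: antrea-ci-cluster-md-0
      infrastructureRef:          # per-machine settings (CPUs, memory, ...)
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: antrea-ci-cluster
```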
Now let's move on specifically to the Cluster API Provider vSphere, CAPV. This is the Cluster API provider we are currently using for the Antrea CI: it deploys Kubernetes on demand on a vSphere cluster, and the workflow is as follows.
Firstly, we need to deploy a bootstrap cluster; this cluster is usually a VM with a kind cluster inside (just ignore all these tags, they are for the people who are not watching the recording and may want to check the text). After you deploy the kind cluster, we will install the Cluster API components and the vSphere provider inside it.
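In clusterctl terms, this step is roughly the following (vSphere credentials are assumed to be configured in clusterctl's provider settings beforehand):

```sh
# Create the local bootstrap cluster, then install the Cluster API core
# components and the vSphere infrastructure provider into it.
kind create cluster --name capi-bootstrap
clusterctl init --infrastructure vsphere
```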
They are installed as Deployments inside your kind cluster, and then we will configure the connection to the vSphere API. Then you can define the Cluster API CRs on the kind cluster, and it will further deploy a management cluster for you. A management cluster is needed when, for example, you want to manage 500 or 1000 Kubernetes clusters and operate on them; a kind cluster as a control point is maybe not powerful enough.
So we use this bootstrap cluster to deploy a management cluster, and then your management cluster is powerful enough to contain, for example, three control plane nodes and three worker nodes. In fact, recently we deployed a scale-out cluster of about 300 nodes for testing Antrea flow visibility, and we used a formal management cluster for that purpose. From the management cluster you can freely apply your Kubernetes Cluster API CRs, and it will really deploy the workload clusters for you.
This is a funny thing, but in our CI people actually don't submit PRs that frequently, and we usually manage dozens of clusters at the same time, so we don't need a real management cluster; we just use a kind cluster as the bootstrap cluster. So let me go to the next slide.
Just now Xu Yang mentioned that we have an IP allocation method, and why that is needed for CAPV. I will talk about this and look at the real deployment of our CAPV on VMware Cloud and in our lab. We are actually just using a kind bootstrap cluster, and on this bootstrap cluster we will allocate a namespace for each job, so the workflow is as follows.
You can see that this box is a namespace inside your kind cluster, and inside this namespace we will create the Kubernetes cluster CRs according to the template in our Antrea CI code. The cluster usually consists of one control plane node and two worker nodes, and all these clusters will actually be created on vSphere; vSphere offers vSAN for the persistent storage and the node volumes, and it also offers an NSX logical network for your node network.
After that, your Kubernetes virtual machines will be deployed, and then CAPV will run kubeadm on the cluster nodes to form a real Kubernetes cluster. You know that all of the worker nodes and the control plane nodes are in the same L2 network, and the problem is that on vSphere we don't have a built-in load balancer solution. So, for example, if you have three control plane nodes, you need a load balancer solution to provide the virtual IP for your Kubernetes API server.
The solution in CAPV, since it is an open source project and doesn't want to rely on any commercial load balancer solution, is to use kube-vip. kube-vip works in L2 mode, so it will do a leader election for your local Kubernetes API server and maintain the VIP across the control plane nodes. The L2 mode of kube-vip requires that your VIP is in the same CIDR as your node CIDR, and that's why we have the IP allocation step.
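For illustration, kube-vip can generate the static Pod manifest for this itself; the interface name and the VIP below are placeholders standing in for the address allocated in the earlier IP management step:

```sh
# Generate a static Pod manifest that runs kube-vip in ARP (L2) mode,
# announcing the control plane VIP with leader election enabled.
kube-vip manifest pod \
    --interface eth0 \
    --address 192.168.10.100 \
    --controlplane \
    --arp \
    --leaderElection > /etc/kubernetes/manifests/kube-vip.yaml
```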
So then you actually have your cluster up, and at this time there is no CNI in it.
In our Jenkins test code we will build the Antrea image on the kind cluster, because it has Docker in it, and then it will distribute the images to the nodes, deploy the Antrea YAML, and run whatever tests, for example the e2e tests and the conformance tests, on your Kubernetes cluster. After the tests, it exports the logs and deletes everything inside this namespace, so all of the resources on the underlay and on the vSphere physical cluster will be cleaned up very quickly and cleanly.
So it's a very convenient way of allocating a new Kubernetes cluster on demand, and this ensures that we don't reuse any clusters, so no test has to consider, for example, cleaning up leftovers or any resources inside your Kubernetes cluster, and it's convenient for us. CAPV is also one of the IaaS providers in Tanzu Community Edition, which is actually a tool for you to deploy and manage Kubernetes clusters on different IaaS: on vSphere it is using CAPV, and on AWS it is using a different bunch of technologies.
So this is actually the topology and the overview of CAPV in our lab and also on VMware Cloud. Do you have any questions related to this slide? Yeah, so you do have a question, right?
A
Yeah, but I would first like to, you know, listen to all of your presentation; perhaps you will answer it.
C
Okay, thanks. So let me introduce the Cluster API AWS provider; AWS is a public cloud provider. You know that we use the same template for the Cluster API common part, but we need to use different resources for the AWS infrastructure reference. If the Cluster API deploys a cluster inside your AWS, you need to set up, or have it set up, some resources for you. For the people who are not familiar with AWS, I will explain these resources first.
Firstly, you need the VPC, a virtual private cloud. It's an isolation boundary, and you can think of it like this: it has a router to the internet and it has some subnets inside it, so it's like a virtual private data center for you. You can attach your AWS instances to the subnets, and you can also have public IPs bound to your interfaces.
You can also have persistent volumes attached, and you can assign security groups to your instances; these security groups will control the traffic in and out, and you can set allow rules to allow some traffic in and out. You can also have the Elastic Load Balancer in AWS; it's a managed load balancer in the AWS infrastructure.
So you don't have to manage your own load balancer solution, and the load balancer can have a public IP and a public domain name. So this is the AWS overview. We actually have a work-in-progress pull request to add AWS Cluster API support to the Antrea CI, and this was done by our intern, Ting, who is located in Shanghai; he did a good job on this. So what does this PR do?
This PR will automatically deploy a cluster on AWS, and it allows you to define your custom security groups on an existing VPC. The difference between CAPV and CAPA is that, firstly, you need to create a VPC in advance, and then you need to create some subnets inside your VPC in advance and specify them to our CI script, so that when you create a cluster from your kind cluster, the instances will be attached to the subnets and to the security groups. You can also have CAPA automatically create everything for you.
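A sketch of what this looks like in the AWSCluster CR; all IDs are fake, and exact field names can differ between CAPA API versions:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: antrea-ci-aws
spec:
  region: us-west-2
  sshKeyName: antrea-ci               # key pair used to SSH into the nodes
  network:
    vpc:
      id: vpc-0abc0abc0abc0abc0       # pre-created VPC, passed in by the CI script
    subnets:
      - id: subnet-0abc0abc0abc0abc0  # pre-created subnet inside that VPC
```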
But in our situation the Antrea e2e test cases need to access each control plane node and worker node, and we also need to deploy or distribute the Antrea images onto these nodes, so we need SSH. Antrea also requires some ports to be open for the agent to connect to the controller directly, and also for the agents to maintain the high-availability cluster for Egress.
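CAPA can express such CNI port requirements declaratively through extra ingress rules on the node security groups; here is a hedged fragment of an AWSCluster spec, with port numbers taken from the Antrea documentation (TCP 10349 for agent-to-controller, TCP/UDP 10351 for the Egress memberlist, UDP 6081 for Geneve):

```yaml
# Fragment of an AWSCluster spec; adjust ports to your traffic
# encapsulation mode and verify field names against your CAPA version.
spec:
  network:
    cni:
      cniIngressRules:
        - description: antrea-agent to antrea-controller
          protocol: tcp
          fromPort: 10349
          toPort: 10349
        - description: antrea Egress memberlist (TCP)
          protocol: tcp
          fromPort: 10351
          toPort: 10351
        - description: antrea Egress memberlist (UDP)
          protocol: udp
          fromPort: 10351
          toPort: 10351
        - description: Geneve overlay
          protocol: udp
          fromPort: 6081
          toPort: 6081
```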
So all of this needs some special customization of the security groups, and we actually have a combination for this. Firstly, if we apply a cluster on AWS, it will automatically create the load balancer for you and assign the public domain name for you, and it will automatically create some security groups for the load balancer; then it creates the instances on the specified subnets, and it will attach the instances to the custom security groups. So this is the major difference, and the other parts are the same.
We use the same code to deploy Antrea and run the tests, so the only parts that are different are the IP management, the load balancer for the Kubernetes API and some security groups. The Cluster API perfectly hides the details of the different providers, and we just need to use different cluster templates for different IaaS. Note that after your cluster is deployed there is no CNI in it, so it's not like EKS.
You know that in EKS your Pods will get AWS elastic network interfaces and get subnet IPs, but the Cluster API has no such support: only your nodes get an elastic interface on the subnet, and the Pods are still using the node IPAM controller. So this is perfect for us to test Antrea on it. Before we can make this work, there is some other work to do. Firstly, we need to fine-tune the security groups, as I just mentioned.
Actually, we are opening every port now; we will open only the necessary ports and find out if it works. We also want to remove the ELB, or make it just an internal load balancer, because by default it opens your Kubernetes API to the internet, and this is not safe for us; maybe we can also tune the security group to block access from the internet. Then we need to add jobs to Jenkins and schedule some jobs to AWS.
For AWS, it costs more than our lab and our VMware public cloud, so we will only schedule, for example, a daily job on it. Also, the vSphere provider has to use some older Kubernetes releases, as it does not release according to the Kubernetes cadence in a very timely manner, but CAPA releases support for new Kubernetes versions in a very timely manner; so, for example, if we want to test the compatibility with Kubernetes 1.23 or 1.24, we can probably rely on AWS for this.
D
C
We can also have multiple control plane nodes if that's necessary.
D
Yeah, so do we also run the Pods on the control plane node, or is it only for the controller?
C
D
And so we have three nodes in the cluster, right? Okay.
C
Okay, thanks. So another point is that for simplicity I just drew one subnet; actually, you can have multiple subnets for different clusters and for the bootstrap VM, and this bootstrap VM will be connected to Jenkins.
And maybe let me go to the next slide: the improvements for Cluster API usage in CI. Firstly, we need to recover the CI on VMware Cloud, and Xu Yang is already working on that. Then we want to use newer CAPV node images, as the current images are too old for the new Kubernetes releases, and we want to freely switch between VMware Cloud and the lab.
There is some work to do for this: we need to, for example, add some automation to disable some jobs on the VMware account and then do a JJB update and update all the jobs on the lab; or, if we want to switch back to VMware Cloud, we need to disable the jobs in the lab and update the jobs in VMware Cloud.
The benefit of running jobs inside VMware Cloud is that we can expose all of the logs to GitHub, so you can see everything: what's failing, what's running, what's being deployed in your job, in your test. But in the lab everything is sensitive, so we don't want to expose any logs for now. Then we want to do some daily jobs on AWS with recent Kubernetes releases, and I'm working on this with our intern.
I also noticed that some of our tests, for example the IPAM, multi-cluster, IPv6 and some other jobs, require specific topologies from the underlay, and I think that the IPAM and the multi-cluster jobs can probably be moved to use the Cluster API. For now they are actually using pre-deployed Kubernetes clusters, and I think the core requirement for these jobs to work is to configure static routes on your underlay; and I think on NSX and on the AWS VPC we have the ability to configure static routing on the underlay.
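On the AWS side, such an underlay static route is one CLI call per Pod CIDR; all IDs here are placeholders:

```sh
# Route a node's Pod CIDR to that node's instance in the VPC route table,
# which is what noEncap-style IPAM and multi-cluster jobs need from the underlay.
aws ec2 create-route \
    --route-table-id rtb-0abc0abc0abc0abc0 \
    --destination-cidr-block 10.10.1.0/24 \
    --instance-id i-0abc0abc0abc0abc0
```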
So if we migrate these jobs to the Cluster API, we can manage fewer predefined testbeds, and we use the same technologies for more and more testing jobs. Another problem is that, you know, recently we found a bug between kube-vip and the Antrea flexible IPAM, and if we actually used the Cluster API for the IPAM testbed, we would discover this type of bug or conflict in a more timely way.
And then I also have an idea. You know that recently we have the Theia project, and it also borrows some code from the Antrea project, and the Cluster API code for Antrea and for Theia is actually almost the same.
So I think maybe we can extract it into a common library and then just import that in Antrea and Theia; in this way we just add the AWS support in one place and it can be consumed by both Antrea and Theia, and maybe in the future we can add more providers to our Cluster API CI.
We can also potentially leverage Tanzu Community Edition, because it essentially does the same thing we want, but it depends on whether it can simplify our work of maintaining the special underlay topologies for, for example, the IPAM and multi-cluster jobs.
A
Hello, just a quick question regarding the Smee channel: we're leveraging a third party for opening this channel. Do you think there is any security concern? Do we have to worry about any security concern and perhaps deploy our own Smee instance, or is it perfectly okay even if we don't have any security guarantee?
C
Yeah, this is a very good question. For now we are just relying on the public Smee, and this is because Smee is developed by GitHub itself, the same guys developing GitHub, and it's also operated by the GitHub guys on the Azure cloud. So I guess that for now we can trust them; but since Jenkins is running inside our lab, if we don't trust them, Smee is an open source project.
We could also, for example, set up an AWS VM to host the Smee server, apply for a public certificate and a domain name, and just maintain our own Smee; but this will take more effort to maintain, and for now I don't want to invest in this direction, but people can discuss.
E
One of the very annoying things I found out about the CI is that in some cases you figure out something really little to change in the PR, and you force-push to the PR a couple of times; every time you do that, the tests run again, and people need to wait until the earlier jobs finish while the newer jobs are queued, so they need to wait for all these in-flight tests.
So I'm not sure, with the new Smee server or whatnot, whether there is any logic you can implement on the Smee side, or maybe the GitHub side or the Jenkins side, which can do something like: if there is a new change on a specific PR, then we sort of tear down or stop the original CI build for that specific PR and just spawn up a new one.
I don't know if it's as easy as just deleting the, quote unquote, namespace you mentioned for the CAPV cluster that you spawned up and just creating a new one, or if it is more involved than that. This is my question.
C
Yeah, I got it. So, unfortunately, in Jenkins, if we just stop a job forcefully and abort it, there is no chance for us to do any cleanup. It relies on your job finishing, either successfully or failed, and then there is a catch for the exit signal, and then it performs the cleanup. But your idea is very good, and I think we can maybe support some more comments in the PR.
For example, if you want to abort something, you just add an abort comment for something, and then we use the Jenkins API to find out which jobs are running for your PR, and then we kill those jobs and clean up the namespaces. I think this is the way to do it. So your user experience will be that you trigger a test, then you push the new patch, and then you comment to abort all or abort some particular job.
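A minimal sketch of that flow against the Jenkins REST API; the controller URL, job name and the ghprbPullId parameter (exposed by the GitHub Pull Request Builder plugin) are assumptions about the job layout, and authentication is omitted:

```sh
JENKINS=https://jenkins.example.com   # placeholder controller URL
JOB=antrea-e2e-for-pull-request       # hypothetical job name
PR=1234                               # PR number parsed from the abort comment

# Find running builds of the job whose ghprbPullId parameter matches the
# PR, then stop each one; namespace cleanup would happen separately.
curl -sg "$JENKINS/job/$JOB/api/json?tree=builds[number,building,actions[parameters[name,value]]]" |
  jq -r --arg pr "$PR" '
    .builds[]
    | select(.building)
    | select([.actions[].parameters[]?
              | select(.name == "ghprbPullId" and .value == $pr)] | length > 0)
    | .number' |
  while read -r build; do
    curl -X POST "$JENKINS/job/$JOB/$build/stop"
  done
```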
E
Fixing comments on the PR and having to wait for a PR run that, you know, has to run again is kind of annoying.
C
Right, right, okay. And then I think we can use the Jenkins API to fetch all of the jobs and detect which jobs are related to this PR, and then, once you comment to abort all or stop, it can find out the related jobs and kill them. I can investigate this, and I will investigate it after the meeting.
E
A
Okay, so I have a very stupid final question. Now that we are starting to run jobs on the AWS public cloud, did you make an estimate of, for instance, how much it will cost to run a CI job?
C
Yeah, and this is a very good question. The budget I set for us is 200 US dollars for a month, and I evaluated that each round, for example an Antrea e2e run, will cost about 12 or 15 dollars. I can probably raise the budget, because we have really small costs on AWS, usually less than 200 dollars, and I think I can accept, for example, 300 or 400. And actually we have a very, very low cost on Google Cloud Platform and Azure.
Really, it's less than two dollars on Azure and less than 20 on Google Cloud Platform. So if we extend our Cluster API CI to Azure or Google Cloud Platform, we can also distribute the workload to these IaaS, so maybe it will further cut down our AWS costs.
A
Good, good. So, as you discussed in the presentation, we'll mostly use AWS for a daily job at the moment; is that correct, considering our budget?
C
Yeah, yeah. So actually AWS costs more because it's expensive, and sometimes our EKS compatibility job will create some leftover clusters, and they are not cleaned up in a timely manner.
A
Good, okay, thanks for this clarification. Okay, so it seems that there is no additional question; I'll probably just wait a few more seconds in case there is some.
F
The jobs which were running in Jenkins before, will they move to the new platform or just remain as they are? Like the IPv6 jobs we had before, etc.
C
Pardon, could you repeat the question again, more slowly?
F
C
So there are some IPv6 and some Windows jobs implemented in a private repository, and the testbed deployment code is also in a private repository. I actually want to open-source this code, because there is no sensitive information in it; it's just that when these jobs were implemented, we did it in an ugly way, and then we decided not to refactor and expose it to the public for now.
It just works; but I think, for the long term, we should refine all this code and contribute it to the public Antrea repo, so people know what we are testing for IPv6 and for Windows.
F
A
Perfect. It appears, then, that this is all for this presentation. I would like to thank the presenters again for bringing up this presentation and, most importantly, for taking care of all these enhancements to our CI framework. Now it's time for open discussion: is there any other topic that you would like to bring up? Questions, grievances, complaints?
I believe, therefore, that this is all for today, so I would like to thank everyone for joining, and thanks again to the presenters. I wish everyone a good day or good afternoon, or, if you are on the US West Coast, as usual, I wish you a good night.
Thanks for joining, and see you again in two weeks' time.