From YouTube: Antrea Community Meeting 09/25/2023
Antrea Community Meeting, September 25th 2023
A
Good morning, good evening, and/or good afternoon. Today is Tuesday, September 26th, and this is the Antrea community meeting. Today we have just a single topic on our agenda: a presentation about running the Antrea matrix tests for different Kubernetes versions using CAPA (sorry, I don't know if I pronounce it correctly). That's the only presentation we have, so let's go straight to it. The presentation is going to be given by Pulkit. So Pulkit, please go ahead with your presentation.
B
This is a general architecture diagram for the CAPA jobs that we have. Inside a VPC in AWS we have a bootstrap VM, the CAPA bootstrap VM, on which a kind cluster runs; that kind cluster acts as the CAPA management cluster, so to speak. Inside that management cluster we can create multiple workload clusters, and on those workload clusters we run the Antrea tests: the matrix compatibility tests, or any other tests such as e2e, conformance, and NetworkPolicy.
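For reference, a minimal sketch of how a kind-based CAPA management cluster like this is typically bootstrapped; the cluster name and the credential step are illustrative assumptions, not taken from the talk:

```sh
# Create the local kind cluster that will serve as the management cluster
# (cluster name is a placeholder).
kind create cluster --name capa-management

# Encode AWS credentials for the provider and initialize Cluster API with
# the AWS infrastructure provider (CAPA).
export AWS_B64ENCODED_CREDENTIALS="$(clusterawsadm bootstrap credentials encode-as-profile)"
clusterctl init --infrastructure aws
```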
B
So we can create multiple workload clusters inside one management cluster. This one acts as the management cluster, and on the workload clusters we can have as many nodes, control plane nodes and worker nodes, as we want, according to our tests. And as for the control plane nodes of each cluster:
B
They are connected to a load balancer. This is a classic load balancer, which is connected to the control plane nodes, and the kube API server is reachable through it. So whenever we run any kubectl commands, we are not running them against the control plane node directly; we connect to the control plane node via the load balancer.
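As an illustration of this access path: the workload cluster's kubeconfig can be fetched from the management cluster, and its server endpoint is the load balancer's DNS name, so kubectl traffic reaches the control plane only via the ELB (cluster and namespace names below are placeholders):

```sh
# Fetch the kubeconfig for a workload cluster from the management cluster.
clusterctl get kubeconfig my-workload -n my-namespace > workload.kubeconfig

# The API server URL printed here is the classic ELB endpoint, not a node IP.
kubectl --kubeconfig workload.kubeconfig cluster-info
```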
B
This was done so that we do not expose our control plane node directly to the internet or to the user; we always go via the load balancer. We also have security groups configured on the control plane nodes, the worker nodes, and the load balancer, in addition to the custom security groups. One more thing: these load balancers are connected to the internet gateway for now.
B
But as we will see later under future scope, we are planning to remove that connection so that the ELB is not directly exposed to the internet. Then we have a subnet: there is a single subnet inside the VPC which all the nodes use to get their private IPs, and they operate within that particular subnet.
B
And this is the general naming that we are using: job name, hyphen, build number for the namespace, and the same for the cluster. In our present implementation we give the namespace and the cluster the same name.
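A small sketch of that naming scheme, assuming the standard Jenkins JOB_NAME and BUILD_NUMBER environment variables:

```sh
# Namespace and workload cluster share the "<job name>-<build number>" name.
CLUSTER_NAME="${JOB_NAME}-${BUILD_NUMBER}"   # e.g. "matrix-test-42"
kubectl create namespace "${CLUSTER_NAME}"
```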
B
So that is a general overview of the architecture of the CAPA job and of how the clusters are managed and created. We are using the kind distribution here because it is the simplest Kubernetes distribution we can use to create a management cluster. Later, whenever the job is completed, we have two options: either we delete only that particular workload cluster and the management cluster remains, or alternatively:
B
If we want to clean up the management cluster, we can perform the cleanup for the management cluster as well.
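The two cleanup options might look roughly like this (names are placeholders; a Cluster API "cluster" resource can be deleted with kubectl):

```sh
# Option 1: delete only the workload cluster; the kind management cluster stays.
kubectl delete cluster "${CLUSTER_NAME}" -n "${CLUSTER_NAME}"

# Option 2: additionally tear down the management cluster itself.
kind delete cluster --name capa-management
```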
B
So when switching to the service-user model, we modified all the configurations for the clusters: the clusterctl and clusterawsadm resources, the load balancer instances, the cluster YAML. We modified a few things, and we used IAM roles instead of the actual access keys and secret access keys that were there before.
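A hedged sketch of that change: rather than exporting long-lived keys for clusterawsadm and clusterctl to consume, the bootstrap VM relies on an IAM role attached to its instance profile, which the AWS SDK picks up automatically:

```sh
# Before: long-lived static credentials exported on the VM (removed).
# export AWS_ACCESS_KEY_ID=...
# export AWS_SECRET_ACCESS_KEY=...

# After: an IAM role is attached to the bootstrap VM's instance profile,
# so no keys are stored on the machine. This verifies the role is in effect:
aws sts get-caller-identity
```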
B
So that was one of the things. Then I set up the CAPA bootstrap VM, installed the necessary environment settings related to running the Antrea matrix compatibility tests, and registered it as a node on Jenkins. So this CAPA bootstrap VM, which hosts the management cluster, is registered as a node on Jenkins, and whenever a job runs, a workload cluster is created inside that management cluster and the job runs there.
B
Then the next thing was setting up the matrix job to run weekly on AWS. The purpose of this matrix job is to check the compatibility of Antrea with the four most recent Kubernetes releases, whichever releases are current. We have basically created two axes, one for the OS and one for the Kubernetes version, and we can add more OSes as per the requirements.
B
Currently we have two values on the OS axis: we are testing with Ubuntu 20.04 and 22.04. For the Kubernetes versions we have 1.25, 1.26, 1.27, and 1.28. Whenever this job is triggered, a cluster is created with a given OS and a given Kubernetes version. It runs in a matrix fashion: the first OS is paired with each of the four versions, and then the second OS runs with all four versions as well.
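This is not the actual Jenkins job definition, but the matrix semantics described here amount to the following loop: every OS value is paired with every Kubernetes version, and a cluster is created for each combination:

```sh
for os in ubuntu-20.04 ubuntu-22.04; do
  for k8s in v1.25 v1.26 v1.27 v1.28; do
    echo "matrix cell: OS=${os}, Kubernetes=${k8s}"
    # ... provision a workload cluster with this OS image and version,
    # then run the e2e / conformance / NetworkPolicy tests on it ...
  done
done
```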
B
So the clusters are created accordingly: the cluster YAML adapts to the OS version and the Kubernetes version, and the instances that get provisioned on EC2, one control plane node and two worker nodes per cluster, will have that particular OS and that particular version. Then our e2e, conformance, and NetworkPolicy tests run, and the matrix gets filled in according to the test results.
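For illustration, a Cluster API manifest can be parameterized per matrix cell along these lines; the clusterctl flags are real, while the variable names are assumptions:

```sh
# Render a cluster manifest for one matrix cell: one control plane node,
# two worker nodes, and the Kubernetes version under test.
clusterctl generate cluster "${CLUSTER_NAME}" \
  --kubernetes-version "${K8S_VERSION}" \
  --control-plane-machine-count 1 \
  --worker-machine-count 2 > cluster.yaml
kubectl apply -f cluster.yaml
```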
B
And then, if there is an issue, suppose a job fails with version 1.26.1 on a particular Ubuntu release, we can investigate the cause of the error: why is it not running? Is it some compatibility issue, or something related to the configuration settings? That is what the matrix is for. It runs as a cron job.
B
It runs every week on the Jenkins CI that we have. Also, if someone wants to trigger it manually, say they have made some changes in a PR and they think it might affect compatibility with one of the Kubernetes releases and they want to run this matrix job, there is a trigger phrase as well, "test capi aws", which they can use to trigger the job manually from their PR. So we have provided that too. Any questions so far?
B
Then, moving to the future scope. The first and major thing we need to do is create a custom DHCP option set for the VPC, so that public IPs are not assigned to the resources that get created. For now, when we use the Cluster API and apply the YAML to create a CAPA cluster, public IPs are assigned to the load balancer, and previously they were sometimes also assigned to the control plane instances and the worker nodes.
B
We do not want a public IP assigned to them. For that we need to create a custom DHCP option set and assign it to the VPC inside which we are creating these instances. For now, the DHCP option set that is created uses the Amazon DNS servers and the Amazon domain name.
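A sketch of the planned change, with placeholder IDs and addresses: create a DHCP option set that points at a custom DNS server instead of the Amazon defaults, and associate it with the VPC:

```sh
# Create a custom DHCP option set (DNS server address is a placeholder).
aws ec2 create-dhcp-options \
  --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.2"

# Associate it with the VPC the clusters are created in (IDs are placeholders).
aws ec2 associate-dhcp-options \
  --dhcp-options-id dopt-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```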
B
So we need to modify it to use our custom DNS server, so that a public IP is not assigned and nothing is exposed anywhere. The next thing, which I also mentioned earlier during the architecture diagram, is that we need to detach the load balancer from the internet gateway. Those are the two items under future scope, and for the first one, the custom DHCP option set, the work is currently in progress. And then on the testing side:
B
We have to configure all the axes properly. So far we have not tested the complete matrix that was shown there, so we need to test the complete matrix with all the possible values. For now we have tested Ubuntu 20.04 with the different Kubernetes versions; we still need to test the other OS versions against the Kubernetes versions and produce the full compatibility matrix. That is to be taken care of on the testing side in the future.
B
So that's all from my side. I can go on to the demo now, if there are no questions.
B
You can see, so this is the bootstrap VM that we have.
B
So this is the management cluster, which is running. It's a single-node cluster; it has only the control plane node. If I show you the pods and the namespaces that are there: we have the CAPA controller manager, that is, the Cluster API AWS controller manager, running inside the Cluster API AWS system namespace, and then we have the Cluster API kubeadm bootstrap controller and the kubeadm control plane controller running in their respective namespaces.
B
And then we have the CAPI controller manager. These resources were created when we initialized this as a management cluster with AWS as the provider. Then, when we trigger the job: I triggered the job beforehand, so this is the Antrea weekly CAPA AWS matrix compatibility test. Currently it is running with Kubernetes version 1.25.1, and the OS is Ubuntu.
B
The OS version is 20.04, and after the instances are created it runs the Antrea e2e tests on them. You can see, in the instances section, these three instances getting created, the matrix ones: one is for the control plane and the other two are the worker node instances. And then, if I use kubectl:
B
You can see that we have Antrea running on it, and if I get the nodes, these three are the nodes: one control plane node and two worker nodes. You can see the version is 1.25.1. Then the Antrea e2e, conformance, and NetworkPolicy tests run as they usually do. So this is how a workload cluster gets provisioned inside the management cluster.
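The inspection commands from the demo were along these lines (output omitted):

```sh
# Antrea pods running on the workload cluster.
kubectl get pods -n kube-system

# One control plane node and two worker nodes, all at the matrix cell's
# Kubernetes version (v1.25.1 in this run).
kubectl get nodes -o wide
```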
B
And now, if I want to delete this cluster, I can delete it from here using the kubectl commands. That frees up the resources held on the kubectl side, but not the instances that were created: we need to run a cleanup for those instances from a separate script and delete them there. When we delete the workload cluster, these instances are not automatically deleted.
B
We need separate commands in the script to delete these instances and the load balancer that gets created. You can see there is a load balancer here, and on this load balancer we have this instance registered, the control plane instance. Whenever the kube API server is queried, the query goes to the load balancer and reaches the control plane node, and all the operations happen this way. That is the process that takes place.
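The separate cleanup script presumably issues AWS CLI calls along these lines; the verbs are the real ones for EC2 instances and classic load balancers, while the IDs and names are placeholders:

```sh
# Terminate the EC2 instances left behind by the workload cluster.
aws ec2 terminate-instances \
  --instance-ids i-0123456789abcdef0 i-0123456789abcdef1 i-0123456789abcdef2

# Delete the classic ELB that fronted the control plane.
aws elb delete-load-balancer --load-balancer-name "${CLUSTER_NAME}-apiserver"
```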
B
So that was a short demo. And then this is the matrix job that we have. Currently, for testing purposes, I was using only version 1.25.1 for this, but otherwise the entire matrix that I showed in the slides will appear here under the configuration option.
C
I have a question on the resources right now. If you delete the workload cluster from the bootstrap cluster, why does it not delete the related instances from AWS? I would think that if it creates those instances for your cluster, it should delete those instances after you delete the cluster.
B
Okay, so what happens is that when we delete the cluster YAML, that is, the entire YAML that we applied when creating the instances, then on the kubectl side the resources it was holding are freed: when we run kubectl get nodes, we won't get any nodes there.
B
It frees up the resources there, but it does not connect to the AWS API to delete the instances; that connection does not take place, so the instances remain there in a running state. We need to delete them using the AWS CLI, that is, the aws ec2 terminate-instances command.
B
That is the command we need to use to delete the instances for now. There might be another option; we can debug it further and find out if...
C
I think originally there was an assumption in the Cluster API implementation that you need to delete the cluster only, and then it will clean up everything for you, and after that you can delete the namespace. Otherwise, if you delete everything at once, when the API performs the cleanup it cannot find the corresponding cluster and it will fail to do the cleanup. Maybe that's the issue, because, you know, we also use the Cluster API with the vSphere provider internally, so we run into such issues occasionally.
C
So maybe this Cluster API provider implementation has similar issues. You can check whether it's possible to just delete the cluster, and then whether the resources are cleaned up.
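The suggested cleanup order would look roughly like this (an untested sketch; names are placeholders):

```sh
# Delete only the Cluster object and wait, letting the CAPI/CAPA controllers
# deprovision the AWS resources, then remove the namespace afterwards.
kubectl delete cluster "${CLUSTER_NAME}" -n "${CLUSTER_NAME}" --wait
kubectl delete namespace "${CLUSTER_NAME}"
```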
B
Yeah, sure.
A
Looks like there are no other questions. It's been a great presentation, Pulkit; thanks a lot for sharing this. And do we have any other topic for today? That was the only one on the agenda. Is there anything else that you would like to discuss or bring up for review?
A
Well, I believe the silence explains it perfectly: there is no other topic. I'll just keep waiting a few more seconds in case anyone wants to bring up something.
A
And I believe that might be all for today. So thanks everyone for joining, and I wish you a great afternoon, or a good evening, or a good night for our friends who are on the west coast in the U.S. Thanks a lot for attending, and talk to you in two weeks' time.