From YouTube: Antrea Community Meeting 10/26/2020
Description
Antrea Community Meeting, October 26th 2020
A
So the very scary noise from Zoom is reminding you that the meeting is being recorded. For today we have two topics on the agenda, both proposed by Abhishek.
A
The first one is a proposal to add a baseline tier, for which we already have an open issue, issue 1362. The second is potential solutions for sorting Antrea-native policies based on tier priority and policy priority, which is issue 1388.
B
Thank you for sharing the screen. Just to go through the issues, and maybe one proposal for the first issue: ever since we introduced the Antrea-native policies and tiering CRDs in the past couple of releases, we have had some users trying them out, and we have already received some feedback in terms of what they would like to see in the policies or the tiering in those CRDs.
B
A few pieces of feedback — actually more than two things were reported. Two of them are GitHub issues; for some of the others I have been maintaining a list myself, and I'll open up issues for them as well. So among those feature requests, I thought we'd discuss these two.
B
Essentially his use case is — okay, maybe I'll back up a little. With the Antrea network policies and tiering, what we introduced is the ability for administrators to write network policies which will override a developer policy, or a developer-written Kubernetes network policy, so the tiers that we create are always evaluated and enforced before the Kubernetes network policies are.
B
So this use case is actually for something to be done the other way. That is, they want to have some guardrail rules for the cluster — some default cluster-wide policies to be enforced — but they want to allow namespace owners or developers to write their own network policies to override those cluster-wide policies. So this is something that our tiering does not yet solve.
B
So that's the use case. If we go back to the diagram from the community talk that we have — essentially, this is the layout of the land. That is, we provide the capability for users to create their own tiers.
B
In addition to that, we also have some system-generated default tiers — or I should not say default; I will say that we have five tiers that are created when the Antrea controller is initialized, for default consumption by the administrators, but they can also create their own tiers. Now the thing is, if the cluster administrator decides to create their own tiers today, they can only create tiers which will be enforced before the Kubernetes network policy table.
B
So in this case we now have three tiers in the diagram. As you can see, the Emergency tier takes precedence over every other tier, so any policy created in this tier will be enforced first. If there is a match on a rule, the rule's action will be taken.
B
Once all these tiers are evaluated and there is no match, the traffic will be matched against the Kubernetes network policies, which are developer-written — these are all upstream policies. Eventually, when none of those rules matches, what we do today is either allow or drop based on the baseline behavior. So in general, if the pod is not selected by any of those network policies, then the traffic will be allowed.
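The evaluation order just described can be sketched in a few lines. This is a heavy simplification for illustration only: the tier names and first-match-wins semantics come from the discussion, while the matching callables merely stand in for Antrea's actual datapath rules.

```python
# Sketch of the evaluation order: Antrea-native tiers first, then
# Kubernetes NetworkPolicies, and unselected traffic is allowed by default.
TIERS = ["Emergency", "SecurityOps", "NetworkOps", "Platform", "Application"]

def evaluate(packet, tier_rules, k8s_rules):
    """Return the verdict for a packet: first match wins, default allow."""
    for tier in TIERS:
        for match, action in tier_rules.get(tier, []):
            if match(packet):
                return action
    for match, action in k8s_rules:
        if match(packet):
            return action
    return "Allow"  # pod not selected by any policy

# Example: an Emergency-tier drop takes precedence over a K8s allow.
tier_rules = {"Emergency": [(lambda p: p["port"] == 8080, "Drop")]}
k8s_rules = [(lambda p: p["port"] == 8080, "Allow")]
print(evaluate({"port": 8080}, tier_rules, k8s_rules))  # → Drop
print(evaluate({"port": 443}, tier_rules, k8s_rules))   # → Allow
```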
B
This will also be system-generated — the proposal is that we would like to create this baseline tier at startup, and I don't think we should allow users to create more than one baseline tier, or provide that flexibility, because for this particular use case a single tier below the Kubernetes network policy tier probably solves all the use cases that would be out there.
D
So any layer, any tier, would have like a default allow rule, and basically any deny would apply on each one of those tiers?
B
A tier is essentially a collection of policies, and in each policy there are rules. When the Antrea controller starts up, this tier will be created and it will be essentially empty. Let's say a user does not create any policy and associate it with the baseline tier — then it stays empty, and that would essentially mean today's behavior, the way the policies work today. That is, by default the traffic will be allowed, because in a Kubernetes cluster every pod should be able to communicate with every other pod.
B
So I think the use case that was brought up was that he would like to create a cluster-wide default-deny policy, with certain exceptions for communication from pods to the CoreDNS pods and the kube-system namespace, and he would want the namespace owners to be able to allow traffic on top of this denied traffic. So if the namespace owner has not created an allow rule — or rather, created a policy to allow some traffic — the default behavior would be to deny that traffic.
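Such a guardrail could look roughly like the sketch below: a ClusterNetworkPolicy placed in the proposed baseline tier that drops all ingress, leaving namespace owners free to punch holes with their own Kubernetes NetworkPolicies. The API group/version and field names are assumptions based on the Antrea v1alpha1 CRDs of that era, and `baseline` is the tier name under discussion here, not a shipped feature.

```yaml
apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: baseline-default-deny
spec:
  tier: baseline            # proposed tier, enforced after K8s NetworkPolicies
  priority: 10
  appliedTo:
    - podSelector: {}       # every pod in the cluster
  ingress:
    - action: Drop          # denied unless a developer policy allowed it earlier
      from:
        - podSelector: {}
```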
F
Yeah, sorry — I think that's something that also came up in the cluster network policy discussions we had last Thursday: Kubernetes allows pods to talk to pods by default. So if the baseline tier is created under the Kubernetes network policy layer, then I agree that in this tier it probably doesn't make sense for people to create an allow rule, because the pods' communication patterns will be allowed by default. But I don't know if that's some hard constraint that you want to impose on this tier, or if people just specify allow rules in this tier, they would simply have no effect.
B
Yeah, essentially, if you have an allow rule in the baseline tier, it's really not doing anything special, because by default you still allow traffic. I think by adding the baseline tier you're just giving more control for an administrator to write policies, but at the end of the day, what kind of policies they write is something that we would not dictate.
B
So you are agreeing that we don't really need to provide a way for our users to add more tiers after the baseline tier?
A
The current proposal will put you in a situation where a Kubernetes network policy can override the baseline tier — meaning that, say, the baseline tier has a policy to deny traffic on a given port, and then you override it with a Kubernetes network policy to instead allow traffic on that port. Is that possible?
B
Yes, I think that is what we are trying to achieve with this.
B
Yeah, so basically, let's say in this diagram you have a policy A, and let's say it is used to deny traffic on port 8080. Now as a namespace owner, for my namespace, I create a network policy Z, which is essentially the same thing, but it allows certain pods to receive traffic on port 8080. Then for those pods the traffic will be allowed, and for the other pods it will not be allowed.
A
Yeah, okay, sorry — then I was thinking about a different use case, where you were using this baseline tier for having a baseline security behavior that users could not override. That's a different use case.
E
Actually, another one people are talking about is that they want some default behavior that isolates the namespace. That means inside a namespace you can communicate freely, but you cannot do any cross-namespace communication. Such a kind of policy cannot be defined with our current kind of policy expression.
B
Yeah — like I mentioned at the beginning, there are multiple feature requests that have come in. This was one; the one that you mentioned is the other, wherein people want a very simple way to write a cluster-scoped policy whose effect is that all intra-namespace traffic is allowed while inter-namespace traffic is not, and they would like to have that as a baseline policy. So with the baseline tier, I think we give them a place to have such a kind of policy, but our policy is not yet expressive enough to do that — namespace isolation. I think we can add that on top of it afterwards, and give our cluster-scoped policies the ability to express namespace isolation in an easy manner. So maybe we can decouple those two stories. Did I make sense?
B
Yeah, I haven't yet thought through how to implement the namespace isolation, but we can think about it in a separate user story — that's what I'm saying.
B
But are you suggesting that we should do the namespace isolation only for this baseline tier, or does it make sense for higher-order tiers too?
B
Okay, yeah, I think we need to have our policies be able to express that in a very simple fashion. Today that is not yet the case.
G
Regarding having allow rules in the baseline tier, I had a question. What if I want to define a baseline policy that says: unless the developer decides to prevent this, allow all of my pods to talk to the CoreDNS pods? I could do that using a cluster-scoped policy, right?
G
Well, yeah, but if I define a policy that selects pod A and says pod A can only talk to pod B, pod A becomes isolated, right? So it cannot talk to other pods even though I don't do an explicit deny — just doing an allow means...
B
I think that's right, because if your pod A was supposed to be isolated, but the isolation rule is enforced after the baseline tier and the baseline tier adds an allow, then we are altering the behavior of that Kubernetes network policy.
G
Because the way I prevent traffic with a Kubernetes network policy is by making a pod isolated, basically, with an explicit network policy. And then, as you say, if I allow some traffic in the baseline tier using allow rules, I'm basically changing the implicit behavior of the Kubernetes network policies. Yeah.
F
Yeah, so I feel like in this particular case the cluster admin might be better off creating that policy not in the baseline tier, but in some tier that's above the Kubernetes network policies, if what they want to enforce is something like: you should always be allowed to talk to CoreDNS.
G
Well, yeah, but then the thing is that the developer cannot override it, right? What I was saying was: you should allow pods to talk to CoreDNS unless explicitly denied, basically. But you cannot deny it with a Kubernetes network policy.
B
I mean, users can write allow policies there; it's just that they're not really going to have much of an effect on the cluster.
G
Yeah, it's hard to see how it would have any effect, right? Because if you don't have any Kubernetes network policies, then your baseline tier is not useful. And if you do have Kubernetes network policies, we don't want to be able to negate the isolation behaviors that they provide.
B
Yeah, so there we could introduce a couple of option values. One is deny, where you just deny any traffic, and the other is to allow only within the namespace, which gives you the default isolation. The only thing is that it doesn't give you a way to express very specific rules, like: I don't want certain traffic to be allowed by default, but maybe the namespace owner can change that. So that's an alternative which works for most use cases, but maybe not all. I don't know if there exists a use case which needs to be that specific for a baseline tier.
E
Yeah, the first approach is basically more flexible — you can control things per namespace and so on.
E
And for the intra-namespace allow, as long as it can be overridden, I think it probably makes more sense for it to be in the baseline tier only; otherwise it's very strange. If you allow all intra-namespace traffic, that will override the policies below the tier.
E
Sorry, probably I mean a different thing — actually I mean two behaviors: isolate the namespace, and allow intra-namespace traffic. I just mixed them up. Yeah, you're right — if you just want that, you can still do it. Okay, makes sense.
G
Yeah, because right now, based on the use cases that people have presented to us, these are mostly like the configuration parameters you presented: allow within a namespace by default but not to other namespaces, or default deny. Those are the actual user requests we've had, but I agree that we can make ACNP more expressive to satisfy those use cases, and then the baseline tier semantics become more useful.
B
All right, so I guess that could be the next issue which we can open up, for the namespace isolation expression, and maybe we can think about how to do that with our policies.
F
Yeah, that works for me, but I'm just thinking, for that reserved range, maybe we want to pick some in-between values in case we wanted to add more baseline tiers.
G
Yeah, and if I try to create a tier above 250 right now — which I think is the default tier priority right now — the webhook is going to give me an error, correct? Okay, and we're going to keep that behavior if we introduce the baseline tier as well, so I won't be able to create a tier between the default tier and the baseline tier, right? Correct.
B
So yeah, maybe we can discuss the exact implementation, what number to use, in the PR. The next one is more cosmetic, not much functional behavior. As you all know, for the cluster-scoped policies we have priorities which help us order a policy's precedence over the other policies: the lower the number value of the priority, the higher the order.
B
Similarly, the tiers are also ordered based on priorities, so there are a lot of priority numbers going around, and since we don't have a dashboard or a way to visualize things, mostly people use kubectl to view those policies. As part of the issue I also provide an example.
B
If you sort by the priority field, you get all the cluster policies, the candidate policies, in increasing order of the priority number. But if you have policies spanning multiple tiers, it will only consider the policy priority number — it doesn't consider the tier order, because the tier is in the form of its string name, right?
B
So what we are seeing is that pa1 is shown at the top, so as a user I would think that this is the highest-ordered policy, which is not the case, because pa1 belongs to the Application tier, and as you know the Application tier is one of the lowest-priority tiers, while the Emergency tier is the highest priority. So pe1, which belongs to Emergency, is technically the highest-ordered policy.
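The mis-ordering can be seen in a small sketch: sorting on the policy priority alone (what a single-field sort on `.spec.priority` effectively does) puts pa1 first, while sorting on the tier's numeric priority and then the policy priority puts pe1 first. The tier priority numbers below are illustrative, not necessarily the exact values Antrea assigns.

```python
# Numeric priority of each static tier (illustrative values; lower = higher precedence).
TIER_PRIORITY = {"Emergency": 50, "SecurityOps": 100, "NetworkOps": 150,
                 "Platform": 200, "Application": 250}

policies = [
    {"name": "pa1", "tier": "Application", "priority": 1},
    {"name": "pe1", "tier": "Emergency", "priority": 5},
    {"name": "ps1", "tier": "SecurityOps", "priority": 2},
]

# Naive sort on the policy priority number only — misleading across tiers.
by_policy_priority = sorted(policies, key=lambda p: p["priority"])

# Effective order: tier priority first, then policy priority within the tier.
effective = sorted(policies,
                   key=lambda p: (TIER_PRIORITY[p["tier"]], p["priority"]))

print([p["name"] for p in by_policy_priority])  # → ['pa1', 'ps1', 'pe1']
print([p["name"] for p in effective])           # → ['pe1', 'ps1', 'pa1']
```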
B
So what happens here is that the output of kubectl is not really giving me the correct picture. Now, there are a couple of ways to solve this problem. One — Yang, maybe you want to talk about it, because you also kind of proposed a similar thing — I had an offline chat with Quan on this about adding a new field, a sort of effective priority, that we can have.
B
We could normalize this into another priority field, a number which becomes part of the Antrea-native policy spec, and then we can sort on that one field. I also spoke about this approach with Quan offline, and maybe, Quan, you want to talk about why you were kind of against it, or did not really like it? I think it was because now we would have like two or three priorities associated with a single policy.
G
Well, I was just going to say I was also thinking about the status thing, but it kind of seems wrong, because I'm adding to status something that's entirely determined by the spec, in a way. It feels weird that I have to rely on status to sort those objects, those resources, when in the end their priority is completely determined by the specification I've provided for them.
B
So for the status — I initially proposed that, and I think Quan also had a suggestion that maybe, instead of the status, we do this in a mutating webhook itself. We introduce a webhook for mutation which, before the policy is passed to the controller, takes in the tier priority, takes in the policy priority, generates the effective priority, and adds it to the spec itself, instead of adding it as part of the status.
G
That's a really good point too. I mean, it would be nice to be able to do it with kubectl using the built-in features, but at the same time we also want to make antctl better, like a go-to place for users, and so maybe we should not go out of our way too much to define some solution just to have it work with kubectl — solutions that would add complexity, right? Because now we would mutate the resource and so on.
F
I see. Yeah, the reason I'm asking this is that when I was discussing this with Abhishek, he brought up a point where, if we have ACNP or ANP enforcement on different nodes, then we probably want to, you know...
G
Yeah, but my point was: right now, if the user actually wants the CRD and not the internal resource — right now antctl only gets you the internal resource. It doesn't query the CRD from the Kubernetes API server; it only queries the internal object from the controller.
F
Yes, but from my point of view, as long as we are given the names with the relative order — if users want to actually look at the CRDs from the Kubernetes perspective, they can always use the name and the kubectl command to actually inspect those resources, as long as in antctl we provide them with the ordering they are looking to see. But yeah, that's my opinion on this.
B
The other proposal that I had probably requires an upstream change. It's potentially something that we can do, if the upstream folks think it is something we can add and it's valuable. Today, the sort-by filter criteria picks only a single field, and the output is sorted only on that.
B
I don't know if they would be happy to accept something like that, or whether it is very easy to do, but the idea is that we provide a list of fields, and the sort happens on each of those multiple fields based on the order in which they are passed in. So for example, if I were to do kubectl get acnp with sort-by spec.tier comma spec.priority, then it would first sort on spec.tier, and the next pass would sort on spec.priority, and that would give us the correct order.
B
Okay, so I said that maybe we need to map the tier name back to the tier priority, and then the user can use that output.
B
Yeah, or you could still cover that, but I think antctl is kind of like a good option.
A
I hate to interrupt on this topic, but we have only 18 minutes left in the meeting and another fairly large topic on today's agenda. So I would like to ask if there is any final question on this issue.
A
As some of you know already, Ray has been working in the past few months on an operator for Antrea, and his work is now at a very mature stage. So he would like to present what this operator for Antrea is, how it works, and what its benefits are. So, Ray, are you ready?
I
Okay, so next I will show a demo of the Antrea operator. Before the demo, I will give a brief introduction to the Antrea operator — for example, what the Antrea operator can do and how it works.
I
The main task, or main target, of the Antrea operator is to provide an automatic way to deploy the Antrea CNI plugin resources. The resources include the Antrea controller deployment, the Antrea agent DaemonSet, config maps, and other resources.
I
Besides the deployment, the operator also has the ability to monitor configuration changes from the user and apply the configuration changes to the Antrea resources.
I
The Antrea operator can also monitor the Antrea resource and object status, and maintain the resources in a desired state.
I
For example, if we delete the Antrea controller or some other resource by mistake, the Antrea operator will notice the change and will create the resource back again to meet the desired state. The last task is that when the operator is running, it will update the related status according to the operator reconciliation process and its results.
I
So those are the main functions of the Antrea operator.
I
Okay, so this is about the components of the Antrea operator. The Antrea operator mainly consists of two components. The Antrea operator runs as a deployment, and the deployment includes the Antrea resource template manifest.
I
This template is from the Antrea all-in-one YAML file; the operator will use this template to generate the Antrea resource specs. With the operator we also have a custom resource, named AntreaInstall. This custom resource is for the Antrea configurations.
I
For example, the user can specify the Antrea configurations in this resource: the Antrea controller config, the agent config, and the Antrea image and image version.
I
This config will be written to the Antrea config maps by the Antrea operator, just as if we edited the config maps directly.
I
And here is an example of the AntreaInstall custom resource. In this spec we can see the configs, which are just the list from above: we can set the agent config in the agent config option, set the controller config in the controller config option, we can also set the image we want Antrea to use, and the last one is the antrea-cni config.
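A sketch of what such an AntreaInstall resource might look like. The API group and field names here are assumptions reconstructed from the description above (not copied from the operator repo), and the embedded config values are illustrative only.

```yaml
apiVersion: operator.antrea.vmware.com/v1
kind: AntreaInstall
metadata:
  name: antrea-install
  namespace: antrea-operator
spec:
  antreaAgentConfig: |       # written into the antrea-agent config map section
    defaultMTU: 1450
  antreaControllerConfig: |  # written into the antrea-controller section
    apiPort: 10349
  antreaCNIConfig: |         # the CNI conflist handed to nodes
    {"cniVersion": "0.3.0", "name": "antrea", "type": "antrea"}
  antreaImage: antrea/antrea-ubuntu:v0.10.1
```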
I
Okay, so next let's see how the Antrea operator works. The Antrea operator just works as a controller. It runs the reconciliation logic to monitor changes of the config and apply the user config to the Antrea resources.
I
These two files contain all the configurations needed by the deployment. The operator will use these config options and render them into the antrea.yml template file to generate the Antrea object specs. This template file is just a copy of the Antrea YAML from the Antrea upstream; the operator just needs to replace the configs in this template. Last, after we generate the Antrea object specs, the operator will apply these specs to the API server to create the Antrea objects.
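The render-and-apply flow just described can be sketched as follows. The placeholder syntax and option names are illustrative only; the real operator renders the upstream antrea.yml template and then applies the result through the API server.

```python
# Sketch of the operator's render step: user config options are substituted
# into a template of antrea.yml to produce the specs that get applied.
TEMPLATE = """\
antrea-agent.conf: |
  defaultMTU: {mtu}
image: {image}
"""

def render(template: str, options: dict) -> str:
    """Fill the manifest template with the operator's config options."""
    return template.format(**options)

manifest = render(TEMPLATE, {"mtu": 1450,
                             "image": "antrea/antrea-ubuntu:v0.10.1"})
print(manifest)  # the rendered spec that would be applied to the API server
```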
I
Besides the config reconciliation, we also run the resource reconciliation logic. It monitors the state of the Antrea controller deployment and the Antrea agent DaemonSet. If a resource is deleted, the operator will recreate the object again.
I
During the reconciliation process, the operator will set the state of the process and its results on some resources, to show the current progress of the operator. Here we list three resources which will record the states.
I
We can take an example: say we just make a config change. The operator will process the config change, and during this stage the Progressing status of these resources will be true.
I
It means that the operator is working on the process. Another one is the Antrea config realization status.
I
Remember that we set the Antrea configs in the AntreaInstall custom resource, so this status will record whether the configuration has been realized or not. The last one is the cluster network status; this resource is used in the OpenShift environment.
I
The operator will update this custom resource to record the status it has currently realized. For example, we can see here the MTU option; it means the operator has currently set the MTU to the value shown.
I
Okay, that was a brief introduction to the Antrea operator. Any questions?
C
Hi Ray, I'd like to know: when will the cluster operator show the degraded state? When will the Degraded status be true?
I
In that case we will see that Antrea cannot be deployed successfully. If this state persists for a long time, the Antrea operator will set the Degraded status to true.
I
Okay, next I will show the demo of the operator on OpenShift 4, and here are the steps I will go through in the demo. Firstly, we will show the cluster network configuration before we deploy the OpenShift cluster. In the second step, we will use the configuration, as I said, to deploy the OpenShift cluster. After we have installed the OpenShift cluster, we can test and verify the Antrea operator functions.
I
For example, we can update some configs of Antrea and see if they take effect. The third step is that we can delete some Antrea resource and see if the operator will create it again. In the last step, I will show whether the Antrea network works properly in the OpenShift environment.
I
Okay, firstly I will edit the cluster network configuration for the cluster. In the install config YAML we will set the networking option.
I
The field we need to assign is the network type. Here it is ncp; we will change it to antrea.
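The edit amounts to changing `networkType` in the networking section of the OpenShift install config; a sketch is below. The CIDR values shown are common OpenShift defaults used for illustration, not values taken from the demo.

```yaml
networking:
  networkType: antrea        # was ncp; selects the Antrea network plugin
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
```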
I
After we execute this command, we will have some manifest files. This directory contains the YAML files, the YAML resources, which need to be installed to the cluster. Because we want to install the Antrea operator during the bootstrap, we need to copy the Antrea operator YAML files into this manifests directory.
I
Okay, next I will use another command, create ignition-configs, and after we execute this command we can see the directory contents change. If we list the files in this directory, we can see there are three ignition config files. These three config files are used to deploy the OpenShift cluster virtual machines.
I
Each of the files contains the virtual machine config and the network configurations for the virtual machines. For example, bootstrap.ign is for the bootstrap virtual machine, the master file is for the master node, and worker.ign is for the worker node.
I
Let's go to the next step. If we want to deploy our OpenShift cluster, we need to download the installer, which is provided by OpenShift to help install the cluster.
I
The OpenShift installer uses Terraform for the deployment, so we need to copy the virtual machine configs into the Terraform config file. Here it is terraform.tfvars.
I
Okay, and we can see in this file we have already set some options. For example, the top lines are the options for vSphere — the vCenter options we want to use. The middle lines include the control plane count and the compute count, and this line is the bootstrap ignition URL.
I
Firstly, let's get the status of the nodes using oc get nodes. We can see here that we have successfully deployed the six-node cluster and the state of every node is Ready. It being Ready means the CNI plugin has also been initialized.
I
Next, let's see the Antrea operator status. We can see here that the status shows Progressing is currently false, which means the states of the config have reached the desired state.
I
Okay, we can see that the Antrea agents and the Antrea controller were created successfully.
I
You can see Progressing is currently false; now I will save the configuration.
I
Okay, we can see Progressing switched back to false, and if we list the pods we can see the agents were created recently, because if we change the MTU, the operator needs to recreate the agents.
I
Okay, in the next step I will delete the Antrea controller deployment and see whether the operator will notice the change and recreate the deployment automatically.
I
So from this step we can show that if an Antrea resource is deleted by a user or in some other way, the operator will notice the change and recreate the resource again.
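This desired-state behavior can be sketched as a toy reconciler. A real operator reacts to watch events from the API server; this shows only the recreate-if-missing logic demonstrated in the demo, with made-up resource names standing in for the real specs.

```python
# Toy reconciler: any desired resource missing from the cluster state is
# recreated from its stored spec, mirroring the delete-and-recreate demo.
desired = {
    "antrea-controller": "deployment-spec",
    "antrea-agent": "daemonset-spec",
}

def reconcile(cluster: dict) -> list:
    """Recreate missing desired resources in `cluster`; return their names."""
    recreated = []
    for name, spec in desired.items():
        if name not in cluster:
            cluster[name] = spec
            recreated.append(name)
    return recreated

cluster = dict(desired)
del cluster["antrea-controller"]   # simulate deleting the deployment
print(reconcile(cluster))          # → ['antrea-controller']
print("antrea-controller" in cluster)  # → True
```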
I
Okay, next is the last step. In this step I want to verify the function of the Antrea networking. We will test whether the network connectivity between pods, and between a pod and a service, is as expected.
I
I select two pods from different nodes and use a command for the test.
I
Yes, here we can see the output, "Hello Kubernetes", so it means the connectivity between the nodes works.
I
Next, let's check the connectivity between a pod and a service. Okay, that's all for the demo.
A
Thank you very much, Ray. We are quite a bit over time, so is there any question for Ray on this demo?
E
Okay, maybe let me start with several quick ones. First, does it support tearing down Antrea? I mean, do we have a way to delete Antrea and clean up all the resources and the state?
E
Yes, and probably even better if we can remove any state left behind, like iptables rules, whatever.
I
Currently, we only support deleting the deployment and the DaemonSet of Antrea.
E
Actually, I'm not sure what OpenShift operators or other operators typically do. I'm just wondering, do we know of any other operator, or do we know whether OpenShift operators support some cleanup?
A
Jianjun, from what I know, some application operators, when you delete the operator, perform this kind of cleanup, like deleting every resource that was managed by the operator; but other operators, when you delete them, just do nothing, so all the resources that were managed by the operator are just left there. I don't know which is the correct behavior, but we find both. In this case in particular, you install the operator with this process in the OpenShift cluster.
A
We can surely think about it, but not in the OpenShift scope, I think. What do you think about it? It would be, I believe, something that the operator has to do explicitly, because if you just delete the operator deployment, then the operator pod will die and will not collect the resources. So there must be an explicit cleanup action, probably.
E
Got it. Another question: are we able to support multiple versions of Antrea with the operator?
I
Yes, because some options will be different between different Antrea versions.
E
And then for master — I mean the Antrea master, the bits from the master branch — we probably don't have a way to track the changes, right?
E
Okay, so basically we're saying that after every Antrea version is released, we have a new operator version; but for master, we probably don't have a way to track the changes on master.
E
Okay, yeah, probably my last one — we probably don't need to conclude here, I've just started wondering: since it's such a useful feature, do you think it makes sense to put it into the Antrea repo itself? I mean, if we're able to decouple the OpenShift-specific things from the generic operator functionalities.
E
Is that possible? I don't know whether it's possible — could we keep some generic functionalities and maintain them in a shared repo? That would mean even for Antrea itself we would have some useful operator features.
E
Probably it's also easier to track the changes in Antrea itself. I mean, if anything depends on the Antrea config files or some other behaviors, it's probably easier to maintain that in the Antrea repo.
I
Yes, currently we want to provide compatibility for the standard Kubernetes environment in the current repo. For the further things you have mentioned, I think as a first step we can consider exposing some functions or some data structures from the Antrea repo; then we can directly use these exposed structures in the Antrea operator.
A
One very quick question from me: do you think this installation process with the Antrea operator for OpenShift can also work on other platforms like AWS, Azure, GCP, or do you think there will be some blocking issue there?
A
All right, thanks for staying with us for a little bit longer than expected. It's been a great presentation from Ray. Also thanks a lot to Abhishek and Yang for the discussion on the network policy and cluster network policy topics. I would like to thank everyone for attending, and remind you that we will keep the updated meeting time until the next change of daylight saving time next spring, and the new meeting time will also be updated in the Antrea README.