From YouTube: Antrea Community Meeting 04/10/2023
Description
Antrea Community Meeting, April 10th 2023
A: All right, good morning, good afternoon, good day, good evening. Thanks for joining this session of the Antrea community meeting. Today is April 11th, and fortunately we have a fairly packed agenda. We will start with a discussion of CI improvements led by Shuyang, and in the second part of the meeting we will review some Antrea APIs which are still in Alpha status, to figure out which ones can be promoted to Beta or even promoted to GA. So this is the agenda for today.
B: Hello everyone, I'm Shuyang, and today I will introduce some new features and enhancements for the Antrea Jenkins pipeline. For the agenda: first I will introduce some new CI features and the related trigger phrases that developers can use in their PRs, and I will also talk about the recent enhancements that improve the stability of the CI pipeline. Finally, we will talk about some future improvements, like new features or enhancements we want to bring in the next release or in the long term.
B: So, firstly, I want to introduce some new features for the Jenkins pipeline. I will talk about new features for the Windows testbed first, and then Andrew will introduce the implementation of the kind CI support. Also, we have supported Rancher CI since Antrea 1.11, which is another new feature for the Jenkins pipeline. Okay, let's start with the new features in Windows CI. We know that Windows has a very different testbed compared to a Linux node, and building the Antrea Windows image takes much more time than building the Linux image, so we want to optimize it to accelerate the whole process — not only for the CI pipeline, but also for other Windows developers who want to quickly build a Windows image for verification or development. That's our motivation, and one of the optimizations is to enable building a Windows base image to accelerate building the agent image, because after investigation we found that we actually waste a lot of time downloading duplicate libraries and files to create the base environments.
B
So,
just
like
what
we
did
in
Linux,
we
started
to
support
building
windows,
space
image
after
entry
1.8
and
on
our
our
testbed
building
latest
entry
windows.
Image
can
cost
more
than
six
six
minutes
with
without
space
image,
but
building
windows.
Image
with
basic
image
supports
will
only
takes
about
two
minutes,
so
this
new
features
can
reduce
image,
building
Time
by
more
than
60
percent,
and
another
good
thing
is
we
we
we
have
updated
the
make
fails.
So
the
developer
don't
need
to
change
their
building
comments.
B
As
we
know,
after
kubernetes
1.24,
the
docker
became
a
deprecated
feature,
so
we
need
a
container
ID
support
to
verify,
and
trade
agent
supports
the
most
different
points
for
the
container
D
test
pad
is
that
it
has
one
more
image
node
for
image
building,
because
we
don't
want
to
install
and
manage
both
continuity
and
Docker
on
the
same
Windows
host
and
it
could
bring
a
potential
conflicts
during
tests.
So
we
only
run
continuity
on
Windows
worker
nodes
and
only
run
Docker
for
building
image
on
another
windows.
B
Image
node,
so
other
workflows
for
continuity
cell
pipeline
is
as
same
as
the
docker
testbed,
and
you
can
see
as
the
lower
left.
We
have
three
new
trigger
phrases
for
e2e
conference
and
the
network
policy
on
Windows
testbed.
Now,
after
entry,
1.10
developers
can
trigger
Windows
continuity
tests
by
these
three
trigger
comments,
and
another
thing
I
want
to
highlight
is
that
the
test
Windows
all
commands
will
trigger
all
windows,
sell
drops,
including
Docker
and
continuity
test
drops
after
entry
1.10.
D: Actually, there are a few tools that claim to partially replace a fully fledged Kubernetes cluster, and using them a developer can have their own local cluster instance. Using that cluster, a developer can run or deploy their applications or execute tests.
D: kind is one of them. As we know, kind — Kubernetes in Docker — is a suite of tooling for local Kubernetes clusters, where each node is a Docker container, and kind targets local clusters for testing purposes. So we have added a set of kind testbeds as Jenkins nodes; a kind testbed is more lightweight than a real Kubernetes cluster. Related to the conformance tests: here we are running the kind tests in Jenkins VMs instead of GitHub Actions. Why?
D: Because the conformance tests require a considerable amount of memory; that's why we prefer a Jenkins VM instead of GitHub Actions. The next part is related to the Cluster API. The Cluster API is a framework to create Kubernetes clusters with different cloud providers such as AWS, vSphere and Azure, and in our vSphere environment the regular jobs like e2e, conformance and network policy...
D: ...use this framework to create runtime VMs for a new Kubernetes cluster. The kind tests can save lots of resources for us, since kind simulates multiple nodes on one real host, but we can say that a cluster created via Cluster API is still closer to the real scenario, and we will keep using it for our other regular Jenkins jobs. For kind, currently we have two trigger phrases: one for the conformance test and the other for the network policy test.
D: So when we trigger these phrases on a GitHub PR, it will start running the tests in Jenkins. You can see the workflow: first you trigger the Jenkins test, then it will create a new kind cluster, then it will start running the conformance or network policy tests, and after finishing the tests it will delete the created cluster.
D: But suppose you have triggered the phrase, a new kind cluster has been created and the tests have started, and during that time you abort the process. In that case the kind cluster will not be deleted and it becomes a garbage cluster. So we need to handle this aborted case as well, and for handling this I have created a PR; in that PR I have added a 135-minute timeout for the tests. So after merging that PR, what will the updated flow look like?
D: If you trigger the phrase, it will start creating a new kind cluster, and just before running the actual tests it will run a cleanup function, where we check: if there is any cluster present for more than 135 minutes, then we delete that particular cluster. After the cleanup function finishes, it will start the actual tests on the new kind cluster.
D: Then, after finishing the tests, it will delete the cluster.
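The age-based cleanup step described here can be sketched as a small helper. This is an illustrative sketch only — the function and variable names are invented and it is not the actual Jenkins pipeline script:

```python
from datetime import datetime, timedelta

# Age threshold mentioned in the meeting: a cluster older than this is
# assumed to be a leftover from an aborted run.
MAX_AGE = timedelta(minutes=135)

def clusters_to_delete(clusters, now):
    """Given a mapping of kind cluster name -> creation time, return the
    names of clusters that exceed MAX_AGE and should be deleted before
    the next test run starts."""
    return sorted(name for name, created in clusters.items()
                  if now - created > MAX_AGE)
```

In the real pipeline the creation times would come from the testbed itself, and the returned names would be passed to something like `kind delete cluster`.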
So that's all about the workflow. One more thing: currently we have added six kind testbeds, each with one slot, for handling parallelism, but in the future we will handle that within a single testbed — we will create multiple clusters on a single testbed and run multiple tests in parallel there. That work is in progress. So that's all from my side. Now, Shuyang, over to you.
B: Thanks, Andrew. The next new feature is about the Rancher CI pipeline, but we won't cover more details in today's presentation, because its implementation was already introduced in last month's community meeting. I just want to reaffirm: if you want to test Antrea as a CNI plugin on a Rancher testbed, it is supported after Antrea 1.11.
B: Previously we used the public Smee service to deliver GitHub webhook events, but recently the public service faced a series of service-abuse issues and its maintainers had to shut it down for more than two weeks. So I think for the Antrea CI pipeline we can't just rely on the public service; we need a backup service for Smee.
B: We also took this opportunity to improve the stability, because we could redeploy the Smee server and clients based on the latest environment. We have enabled a private Smee service for Antrea CI since last month, and we also have good news this week: the public Smee service is back to normal. So in the future we will be able to migrate Smee between the public and the private service, to improve the stability and robustness of our Jenkins CI pipeline.
B: Another enhancement is that we now support multiple Jenkins description files. Previously we only created the public Jenkins job YAML for users who want to build their own Jenkins testbed, but for the other testbeds we didn't provide description files. Now we have created another Jenkins YAML file for the private Jenkins jobs, so developers can learn how these test jobs run, and in the future we hope to involve more developers in the pipeline contribution. Next I will introduce some future improvements.
B: As I have said, we are using VMC for the public Jenkins testbeds, and currently we are working on migrating to AWS. In the future we will have more CI jobs running with kind or Cluster API support: as Andrew said for the kind conformance tests, there is also work in progress on supporting multi-cluster CI in kind, and Andrew will continue working on migrating our IPv6 and other Jenkins jobs to kind. Moreover, our Windows CI jobs still take more time than the Linux jobs, so we will continue working on optimization and bring in more Windows improvements in the next release.
E: I have a quick question: are we planning to keep the Jenkins Windows CI jobs on Docker, or are we planning to remove them and just keep the containerd ones?
B
Currently,
we
we
keep
both
Docker
and
continuity
test
bed
for
Windows,
but
in
the
future,
maybe
we
will
duplicate
the
docker
test
pad
because
we
see
the
kubernetes
communities
they
they
depreciate,
the
docker
after
1.24.
So
as
I
think
yes
in
the
future,
we
will
only
use
the
containerdy
test
bed,
but
currently
I
think
we
we
still
support
both
of
them.
A: All right, that was very informative. Do we have any other questions on this topic?
A
In
terms
of
sorry,
in
terms
of
resources
that
we
will
need
do
we
need
to
allocate,
for
instance,
still
multiple
VMS
to
run
a
multi-cluster,
or
are
we
going
to
deploy
simulate
multiple
clusters
on
a
single
VM,
or
maybe
that's
not
yet
been
defined?.
B: I think the point of running the multi-cluster job in kind is to deploy it all in a single VM. But, as you said, this VM needs more resources for the multi-cluster test.
A: Yeah, sounds good. All right, that was my only curiosity. It seems that we don't have any other questions on the CI, so perhaps we can move to the next topic, which is the second part of the agenda — thank you — which is about discussing the APIs that we can or want to promote. So I don't know who's going to lead this conversation: Antonio, Chen, who's going to do that?
F: Antonio, do you want to start, since you began the conversation? Or I could — I also summarized some items in a GitHub issue for Antrea 2.0.
F: Yeah, this is an issue about the preparation we might need to do for Antrea 2.0. It just summarizes some things we have discussed before in other channels, plus some things I have investigated for this release. It's mainly about the API migration and the API removal, and there are also some tasks to promote some features; some of them are not API-based, but just code-based.
F
We
may
want
to
promote
some
of
them
to
next
stage
and
the
remainings
about
configuration
options.
F
We
have
we
added
some
configurations
in
one
new
answer:
one
and
Angela
one
two:
is
it
zero
to
one
daughter
11,
but,
and
we
we
remove,
we
duplicated
some
configurations,
but
never
rarely
remove
them
from
the
configuration
file.
So
I
think
it's
also
a
good
opportunity
to
remove
the
duplicated
configurations
in
the
new
major
release.
F
Yeah
I
know
Antonio
proposed
to
introduce
new
version
for
some
apis
that
have
been
introduced
for
a
long
time
and
have
been
used
wisely
and
then
do
you
want
to
start
this
part.
E
Yeah
sure
so,
I
think
what
what
we've
noticed
is
that
one
notice
we
knew
that,
but
most
of
the
entry
apis
or
still
in
a
V1
Alpha
stage
for
some
of
them.
We've
introduced
new
versions,
for
example
in
the
for
the
network
policy.
Things
like
cluster
groups,
where
we
may
have
had
like
more
than
one
version,
but
most
of
them
are
in
Alpha,
One
or
Alpha
2
stage,
and
we
think
that
Andrea
is
a
pretty
mature
project.
E
Now
we've
been
around
for
close
to
five
years,
and
so
it
doesn't
necessarily
send
the
right
signal
to
the
community
to
keep
most
of
our
apis
in
V1
Alpha
stage
when
in
truth
or
kind
of
like
telling
users
eventually
that
some
of
those
apis
at
least
or
a
production
ready
and
can
be
used
in
their
production
communities
clusters.
E
And
so
we
should
consider,
if
not
moving
them
straight
to
V1
at
least
moving
them
to
Beta
maturity,
so
that
we
can
like
send
a
better
message
to
our
users
and
our
community
and
I
was
internally
kind
of
like
I
I
generated
some
basic
proposal
about
which
versions
We
could
adopt
and
Channel
10
feedback
and
I
see
that
you're
in
in
Easy
huge,
and
it's
kind
of
like
a
proposal
for
all
the
different
apis
that
we
have
and
as
we
move
to
a
new
version
of
those
apis,
we'll
stick
to
our
policy
about
backward
compatibility
and
upgrade
paths
for
users
and
yeah.
E
That's
that's
about
it.
I
know
that
Chan
had
some
comments
about
potential
improvements
to
some
API
because,
as
we
change
the
version
of
those
apis,
this
is
kind
of
like
a
good
opportunity
to
make
modifications
to
the
API
and
API
types
that
may
break
backwards.
Compatibility.
Of
course.
That
means
that
for
a
while,
we
need
to
be
able
to
support
both
versions
of
the
apis
in
accordance
to
or
upgrade
policies.
But
this
is
a
good
opportunity
to
do
some
cleanups
to
some
of
the
apis
trace
flow.
E
For
example,
I
know
that
over
the
last
couple
of
years,
we've
like
taken
note
of
some
possible
improvements
that
China's
detailed
here
and
if
there
is
an
API
that
you've
been
working
working
on
and
you
kind
of
like
have
something
on
the
back
burner
that
you
think
should
be
improved.
I
think
now
is
the
right
time
to
speak
up,
because
obviously
upgrading
API
is
changing
the
version,
that's
a
bit
painful,
and
so
we
don't
want
to
be
doing
this
all
the
time,
it's
painful
for
us
and
painful
for
users.
F
Yeah
thanks
Antonio
yeah,
actually
I
have
already
identified
some
defects
in
the
current
apis,
where
I
was
doing
some
investigation.
F
For
example,
two
major
issues
I
found
in
choose
flow
and
the
Android
agent
and
the
controlling
for
is
for
trade
flow,
and
there
are
two
fields
introduced
in
the
first
version
and
but
they
are
never
used
and
it
seems
redundant
with
the
field
with
the
same
name
in
its
parent
extract
and
I
was
not
sure
it
was
have
kept
for
future
usage
or
is
is
added
by
mistake.
F
I
think
we
shouldn't
do
a
clean
up
if
it's
not
by
Design,
when
we
graduate
this
when
we
introduce
a
new
version
of
choice,
level,
API
and
except
for
that,
I'm,
not
sure
if
we
are
confident
to
to
to
to
to
graduate
the
trade
flow
to
Beta
or
to
V1
directly.
F
If
we
are
confident
in
enough
I
remember
changing
said
he
may
be
not
very
confident
of
about
graduating
to
way
one
directly
yeah,
so
I
I
really
I
left
this
as
an
open
question
about
this
API
and
for
the
other
one
Android
agent
team
funded
controller
info
I
found.
That
is,
since
this
is
the
first
crd
we
introduced
everything
I
think
we
might.
We
might
made
some
mistakes
when
adding
this
crd,
because
in
the
schema
is
unstructed-
and
it's
still
in
that
case
here,
no
so
I
think
we
shouldn't.
F
Maybe
this
is
not
really
related
to
graduating
to
National
release,
but
we
should
do
it
immediately
if
it
was
a
mistake,
because
typically,
any
kind
of
data
can
be
stored
in
the
CR
yeah
and
except
for
that,
I.
Don't
have
other
concerns
I
on
this
on
the
new
version
of
financial
reasoning
for
under
control
info,
because
they
are
basically
for
real
purpose.
F
Only
and
I
don't
see
a
big
risk
to
how
we
want
version
for
that
and
for
other
apis
I
think
we
discussed
in
other
channels
and
personally
I'm
fine
with
the
The
Proposal
Anthony
had
I'm
going
to
show
you
for
others.
I
will
comments
on
this
part
and
besides
that,
I
I
also
added
two
other
apis
for
or
two
other
group
of
apis
for
consideration.
The
first
one
is
the
multicast
loop.
F
I
think
this
feature
it
may
depends
on
whether
we
could
make
a
podcast
future
Data
before
2.0
and
as
I
as
far
as
I
know
to
the
multicast
group.
Api
implementation
should
be
straightforward
and
there
should
be
no
big
risk
to
to
say
this
feature
is
beta
and
for
the
group
of
stats
API.
F
My
concern
was
currently
we
don't
persistent,
persist,
the
stats
in
any
storage
and
and
every
time
the
controller
restarts
the
stats.
The
the
the
start
of
an
airport
says
will
be
reset
and
we
are
starting
from
zero,
so
I
until
we
had,
we
introduced
some
processing,
storage
or
use
crd
to
persist.
This
data
I
personally
I'm
I'm
not
confident
to
graduating
it
to
next
stage.
So
any
comments
on
this
proposal,
or
or
you
want
to
add
a
new
API
you
are
familiar
with-
is
welcome.
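The concern about non-persisted stats can be illustrated with a toy model. All names here are invented for illustration; Antrea's actual Stats API implementation is different, and the `backing` dict merely stands in for some persistent store such as a CRD:

```python
class StatsCollector:
    """Toy model: stats kept only in process memory are lost on a
    controller restart, unless they are also written to a backing store."""
    def __init__(self, backing=None):
        self.backing = backing
        # On "startup", recover counts from the backing store if one exists.
        self.counts = dict(backing) if backing else {}

    def record(self, policy, n=1):
        self.counts[policy] = self.counts.get(policy, 0) + n
        if self.backing is not None:
            self.backing[policy] = self.counts[policy]
```

Constructing a fresh `StatsCollector()` models a restart: without a backing store the counts come back empty, which is exactly the reset-to-zero behavior described above.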
G: Antonio, I have a question regarding the promotion. I saw that you put them under Antrea 2.0. Does it mean that we only promote them in this version, or will we also do that in version 1?
F
I
I
think
he
is
not
not
necessary
to
make
all
promotion
in
two
build
a
turtle.
Yeah
I
think
he's
trying
to
do
it
before
that
release.
I
I,
I
I
thought
we
just
want
to
make
sure
that
we
have
we
we
how
reasonable
beta
and
the
ga
features
in
the
next
major
release,
but
the
process
could
be
could
be
in
this.
It's
there
in
this
in
one
dot
well
in
in
one
in
the
major
release
of
1.0
yeah.
E
Yeah
I
agree
with
Chen
and
I
personally
have
a
preference
for
option,
one
that
Chen
describes
in
in
the
issue,
which
is
like
introduce
the
new
API
versions
in
1.x
and
then
remove
remove
deprecated
apis
in
in
2.0,
which,
which
is
the
real.
The
real
breaking
change.
F
Yeah;
okay,
if
you
just
know
comments
about
this
part,
perhaps
we
could
talk
about
the
API
duplication
and
removal.
F
F
So
basically
it
provides
two
options:
yeah
yeah
I
mean
yeah.
Maybe
they
respond.
F
Yeah
maybe
like,
let
me
talk
about
why
we
need
to
worry
about
this,
because
when
we
add
a
new
version
of
an
API,
we
may
choose
to
change
the
storage
version
of
this
API.
F
For
example,
when
we
how
we
want
beta1-
and
we
add
way,
one
at
a
new
version
and
then
eventually
we
we
want
users
to
migrate
to
the
new
version,
but
the
users
may
have
used
the
previous
version
for
a
while
and
they
have
accessing
the
data
in
etcd,
which
is
written
in
the
version
of
the
previous,
the
previous
one.
F
But
even
we
updated
the
the
crd
configuration
to
see
that
the
store
watch,
the
storage
washing
should
should
now
be.
We
want.
The
existing
data
doesn't
doesn't
automatically
update
to
the
new
version
and
they
will
be
kept
as
they
are
until
there
is
a
chance
to
rewrite
the
the
object
to
each
City.
So
there
could
be
two
cases
actually
three
cases.
F
The
first
is
the
user
creating
new
object.
Then
the
object
will
be
a
written
to
each
City
in
the
new
version
automatically,
and
we
don't
need
to
worry
about
that
and
the
and
the
second
case
is
user.
Before
upgrade
they
have
some
data
in
etcd,
and
now
they
made
some
change
to
the
uses.
F: The third case: eventually we want to remove the old version, but as long as the data is not modified, it will still be stored in the old version. And one day, when we want to remove the support of the old version, we update the CRD to remove that version — and then the user will not be able to apply that CRD, because there are two problems.
F: The first is that the API server will block the CRD update request, because the status.storedVersions field will contain two versions until the user removes the old one manually — this is by Kubernetes design. The other problem is that even if the user is able to do this themselves, as long as at least one stale object is stored in etcd — even if the CRD update request is not blocked and is applied successfully — the API will no longer be available, because the API server will not be able to convert the stored objects to the right version, due to the missing version in the CRD schema, so the API will stop working. To make sure users can continue using that API in the future, there are two preconditions: first, all of the stored objects need to be updated to the new version; and second, the storedVersions field in the status should be updated to no longer contain the removed version, so that the update of the CRD can succeed. Kubernetes documents two ways to do that.
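The storedVersions rule described here can be illustrated with a small sketch. The real check is enforced by the Kubernetes API server itself; the function below is purely illustrative and its name is invented:

```python
def crd_update_allowed(new_spec_versions, stored_versions):
    """Mimic the API-server rule: a CRD update is allowed only if every
    version recorded in status.storedVersions is still present in the new
    spec. Dropping a version that etcd may still hold objects in is
    rejected until the user prunes storedVersions manually."""
    return set(stored_versions) <= set(new_spec_versions)
```

So with `storedVersions` containing both `v1alpha1` and `v1`, removing `v1alpha1` from the spec is rejected until the stored objects are migrated and the old entry is removed from the status.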
F: The first option is to use the storage version migrator. I tried this tool; I'm not sure whether it's mature enough, but at least it's not very friendly to end users, because you need to apply its manifest, make sure the image is available in your cluster, and then wait for the job to finish. So I think it is not friendly to end users, especially those who are not familiar with this stuff, and after that you still have to remove the old version from the status manually.
F: The second option sounds simpler, but it's a pure manual operation. The key to upgrading the existing objects is to make sure that every object is updated once, so the solution is to ask users to list all existing objects and write them back with the same content, so that the API server automatically stores the objects in the latest version.
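The "update every object once" idea can be simulated with a toy model of the API server's storage behavior. All names here are invented; in a real cluster this would be a loop of list-and-replace calls, for example via kubectl or a CLI subcommand:

```python
class FakeStore:
    """Toy model of etcd + API server: each object remembers the API
    version it was last written in, and any write re-encodes the object
    in the current storage version."""
    def __init__(self, storage_version):
        self.storage_version = storage_version
        self.objects = {}  # name -> (version, content)

    def write(self, name, content):
        self.objects[name] = (self.storage_version, content)

    def migrate_all(self):
        # The manual option: list every object and write it back
        # unchanged, so it is re-persisted in the new storage version.
        for name, (_, content) in list(self.objects.items()):
            self.write(name, content)

    def stored_versions(self):
        return {version for version, _ in self.objects.values()}
```

After `migrate_all()`, `stored_versions()` collapses to the new storage version, satisfying the first precondition; pruning the old entry from the CRD's `status.storedVersions` is still a separate manual step.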
F: ...and then remove the old version from the storedVersions in the status field. If you remember how we dealt with the API group migration, we introduced a mirroring controller — a long-running controller in antrea-controller, responsible for mirroring the objects users stored in the old API group to the new group. But I think that scenario was much more complex than the current one, because there they were not the same...
F: They were not the same API, so we were actually copying the data and converting it, and we were actually dealing with two objects, because the same object existed in the two groups at the same time. But currently, our purpose is only to make sure that every object is updated once, with exactly the same content.
F
So
the
current
requirement
is
much
simpler,
so
I'm
not
sure
whether
to
introduce
a
controller
back
in
Android
controller
is
really
necessary,
and
since
this
is
an
opportunity
of
introduce
a
new
major
release,
so
I
think
perhaps
it's
not
bad
to
ask
users
to
do
some
one.
F
One
click
to
perform
some
one
once
One
Step
command
to
to
migrate
from
old
major
release,
so
I
introduced
some
investigated
some
some
other
projects
and
found
that
they
they
did
the
similar
things
by
adding
a
command
in
the
CRI
and
that
command
just
performs
what
the
documentation
describes
and
yeah
listing
the
objects,
existing
objects
and
writing
them
back
they're
in
the
execution
and
I
think
that
code
is
well
straightforward
and
then
not
hard
to
write
and
is
under
the
benefit
of
that
approach.
F
Is
that
with
it
has
no
impact
on
the
existing
lung
longing
process
like
Android
controller,
if
we
add
a
new
controller
to
to
do
this,
API
migration-
and
there
are
some
changes
needed
to.
There-
are
some
side
effects.
First,
we
need
to
Grant
unnecessary
and
permissions,
because
under
controller
now
will
need
to
create
or
update
all
study
resources
which
we
don't
have
to
before,
and
the
second
is
that
it
will
how
to
cash
or
objects,
even
it.
F
It
was
not
used
by
Android
control
yourself,
so
it
will
increase
the
memory
and
the
CPU
usage
to
how
the
controller
I
mean
in
letter
to
in
that
component,
but-
and
it's
also
more
hard
to
control,
because
we
we
will
have
to
run
it
until
we
are
confident
that
the
user,
how
migrated
all
resources
to
new
version
and
we've
met
only
remove
that
controller.
F
Quite
a
quite
quite
a
lot
quite
some
Mana
release
later,
but
with
a
new
command,
West
Dev
command,
the
the
disadvantage
is
I
described
for
a
controller
is
not
there
and
I
I
and
I
see
that
more
and
more
projects
are
using
their
CRI
to
do
the
installation
for
their
software,
and
also
some
upgrades
and
managing
functionalities
implemented
in
the
SLI,
so
I
think
is
not
is,
is
not
new
to
users
that
they
they
could
use,
adjust
CRI
to
to
to
to
to
gratefully
upgrade
to
a
new
actual
major
release.
F
Yeah
and
the
only
decent
disadvantage
is
that
it
requires
you
the
to
operate
the
com,
the
CRI
once
when
the
the
the
the
the
in
this
case,
but
for
users
that
who
who
who
who
deploy
a
new
cluster
of
2,
200
I,
think
that's
not
a
problem
yeah
and
the
reason
why
I
wanted
to
bring
this
up
earlier,
especially
before
we
started
the
2.0
release.
F
Is
that
I
think
until
2.0
couldn't
could
have
two
meanings?
Two
apis.
The
first
one
and
Anthony
just
mentioned
is
that
it
means
the
end
of
support
for
older
versions.
F
It
could
also
mean
that
is
the
start
of
the
support
of
new
versions.
I
think
different
projects
have
different
qualities.
I
saw
some
projects
like
it's
still
when
they
introduce
the
one
1.0,
they
just
add
new.
They
just
copied
their
old
apis
and
create
new
apis,
and
they
later
they
have
to
I'm,
not
sure
how
many
minorities
later
we
just
remove
the
support
of
the
other
versions.
F
But
in
our
case
I
think,
since
we
are
preparing
the
the
map,
the
new
major
release,
we
could
consider
that
the
do
some
preparation.
For
example.
If
we
are
we,
we
know
what
we
want
to
graduate
in
the
next
major
release.
We
could
add
the
new
versions
earlier
before
that,
for
example,
to
at
least
two
minor
release
earlier
than
that
in
the
next
minorities
of.
F
So
that
users
get
enough
time
window
to
migrate
to
new
versions,
and
then
we
could
just
remove
the
duplicated
versions
in
the
next
major
release
and
as
the
material
major
release
normally
also
means
some
non-compatible
changes.
So
it's
reasonable
to
remove
some
duplicated
API
API
versions,
but
we
we
are
not
ready
now
backwards
compatible
because
which
will
also
provide
some
tools
or
we
do
it
automatically
to
have
user
grids
for
upgrade
to
new
major
release.
F
And
if
we
follow
the
Easter
way
or
we
could,
we
could
just
add
new
versions
in
1.0
to
which
means
that
this
is
the
start
of
support
of
new
versions.
And
then
we
support
the
two
versions
for
some
minorities
and
eventually
remove
the
duplicate.
The
apis,
for
example,
2.2,
but
regardless
of
which
way
we
choose
I,
think
we
need
to
consider
how
to
do
the
clean
up
earlier
and
whether
it
is
it
is
necessary
to
introduce
to
us
or
control
us
to
have
user
grid
for
upgrade.
F
Basically,
I
summarize
that
to
three
options:
the
first
one
is:
we
just
leave
the
upload
to
users
that
under
guide
them
to
follow
kubernetes
options.
Pattern
I
feel
this
is
not
right.
Randomly
and
I.
Think
underneath
some
projects
tries
to
avoid
that,
for
example,
the
a
certain
manager
project
they
had
a
tour
they
introduced.
The
tour
like
I
will
described
in
the
third
option,
and
the
second
choice
is
that
we,
like
a
mirroring
controller,
we
add
a
new
controller
to
help
user
migrate,
the
stored
object,
but
how
yeah
the
this?
F
The
third
one
has
used.
I
I
have
described
zero
yeah
and
the
third
one
is
that
we
do
it
like
a
study
manager
and
introduce.
F
Execute
the
command
once
if
they
are
migrating
from
other
version
and
they
do
have
processing
the
data.
A: No, not much from me. I just want to say that for the third option — the one that apparently is preferred — for users that are leveraging the Antrea operator, I think we can also use the operator to automate this execution of the resource upgrades. It would also be consistent with the third option: we could probably just have the operator call the same logic implemented by the CLI.
F: I think we could, yeah. Maybe it's too early to get real feedback on this proposal. Feel free to leave comments or send me a direct message if you have any.
A
Thanks
John
I
think
it
would
be
great
to
keep
all
the
discussion
on
the
pr
I
appreciate.
Maybe
everyone
needs
to
think
a
little
bit
about
this
but
yep
and
if
you
don't
have
any
feedback
now
as
a
chance
it
just
let's
keep
the
conversation
on
the
pr.
A
All
right,
so
we
are
pretty
much
at
time
for
today's
meeting,
but
if
there
is
any
other
topic
that
you
would
like
to
discuss
any
other
thing
that
you'd
like
to
mention
please
go
ahead.
We
have
a
one
minute
for
open
discussion.
Of
course,
if
needed,
we
can
stretch
the
meeting
for
a
few
few
more
minutes.
A
So
I'll
wait,
let's
see
if
she's
10
to
20
seconds,
for
anyone
to
propose
a
topic.
Otherwise,
we'll
probably
finish
it
here.
A
All
right,
it
seems
that
there
is
nothing
else
for
today.
So
I
would
like
to
thank
everyone
for
attending
and
especially
thanks
a
lot
to
Xiang
and
you
and
Chan
for
presenting
today's
topic
and
we'll
meet
again
in
Twix
time.