From YouTube: Antrea Community Meeting 03/25/2020
Description
Antrea Community Meeting, March 25th 2020
A: Good morning, good afternoon, good evening, good whatever, everyone. This is the Antrea community meeting, and today is... well, staying at home makes it difficult to remember which day it is. It's March 25th. Okay, so we'll wait for a few more people to join and then we'll start the meeting.
A: Yes... no, no, I don't know what that rendered as in my mail client, honestly. Yes, I can read it now. So yes, my idea for today was actually pretty much in line with both of you. The first item is release planning, so we do these very exciting tasks first, and then multi-arch support was the other thing that I wanted to discuss as well, and there's the thing about differential data.
A: I didn't see this one, so this one goes to 0.6.0. And the other comment that I left on your PR is that, maybe, since we've now moved it to 0.6.0 and the work is almost all done, we probably also need to think about integrating and running Antrea with Prometheus in our CI, so that we don't get caught by surprise if something breaks. Do you think this could be doable, or do you see any major problem in doing that?
A: Even if the metrics have to be as simple as, I don't know, the uptime for the agent. Anyway, so that is about Prometheus. And Antonin, regarding issue 181, about deleting Antrea and doing a complete cleanup: do you think we can push it and still make it in 0.5, or should I just move it out?

No.
A: So now the payload looks fairly small, and to be honest with you, I would like to keep it fairly small, because so far we've kept pushing features into milestones but then always pushing them out. So I think that, considering our release cycles, maybe staying with this feature payload is good enough for 0.6.0, unless you already know of other, more important features that need to be added to this release. I don't know, Cody, if you have an opinion here.
A: Okay, and April 22 is four weeks from now. I would say that if, in two weeks' time, we have at least the CI in place, and we are at a good stage in terms of code review, then maybe we can make it. The installation is probably something that, I don't know if we must have immediately; maybe we can decide to defer installation to the next release. Oh, sorry.
A: Cody, we were talking about the 0.6.0 payload, and there was a mention that Windows might be another desired feature for 0.6.0, so we were considering whether it can realistically be made or not. The feedback that we got (I'm just summarizing what we discussed while you were offline) is that the code is at a fairly good stage of development. What is missing is the installation part and the integration into the CI pipeline.
A: For the CI pipeline, work is in progress, but we need to check with Jianjun to see where we are, and the installation part is still to do. So I was wondering whether we can, sorry, sort of divide the shipment of this feature across multiple releases. Maybe we start adding Windows support, but without installation facilities, which means it could not be considered beta or whatever; it should be recognized as alpha.
B: I like that approach, because I do think that there are some additional priorities even before Windows support, right? If we have code that can be tested in alpha, great, we can include it in 0.6. But I think it's imperative that we shift some focus to the subnet problem and to IPAM (some others will have to talk about IPAM): subnets, IPAM, and being able to do grouping of endpoints for policy.
A: A way to install it? No, no, that's right. I was thinking that even if we don't provide installation support, we could provide a guide for installing it manually. Maybe it will be very painful, hacking your way through the installation, but we would provide a sort of guide to say: okay, if you want to try this feature, this is how it's done. But, you know, if we are at a stage where you cannot properly install it, then without installation support you can't realistically run it in the CI pipeline.
B: I'm not suggesting 0.6. I'm saying let's not work on Windows things for now, or let's gradually introduce Windows, because right now Windows is not blocking our traction with customers. That's not the goal that we're being asked to deliver. I think it's going to be a great differentiator, but even if we need to push Windows back until 0.9, right, eight weeks out, that's not going to...
B: ...be a major detractor for the product right now. The biggest hurdle that I'm running into, as I talk to others, especially, say, the TKG team and other folks, is that we really need to focus on: how do we deliver subnet capabilities? How do we deliver tiering capabilities? And how do we deliver some sort of IPAM capability? If that's 0.7 or 0.8, that's fine; I just want to target a June timeframe for those features. Sure, let's...
B: If I were to write this up: so, does it make sense in the next release for us to select a logging strategy? And when you say relying on the container runtime, does that mean that we don't have... because if we select a logging strategy, say something like Fluentd, to ship logs somewhere, I don't know that the log housing is necessarily part of the core right now.
B: But can we at least select and decide on a logging strategy, and build the appropriate support pieces, so that we can go ahead and have a mechanism to obtain logs for a support bundle? I'm trying to figure out where I draw the line on that separation.
F: I ended up describing our plans for log streaming in that one issue. Since we are relying on the container runtime to maintain the log, maybe we should have some way to stream the log to some system or other. Maybe you guys should chime in.
A: I completely agree with you. It is also incredible, in some way, that commercial solutions are relying on this approach of just providing access to container logs. For instance, that's the case for OpenShift: if you need the logs, go fetch the container log. And it is extremely annoying for troubleshooting, because if you have a container that keeps restarting, for instance, you only have the log of the last execution. You don't know what happened before, and typically you have no idea why the container is restarting.
A: So I think that we need to support some form of streaming. Syslog is the most obvious choice, but, you know, I'm old; maybe young people do it in a different way now. I don't think that we want to export logs to commercial solutions, you know, like log collectors, but I'm fairly sure there is something better that we can do apart from syslog.
A: But from what I remember, tools like Kibana and those kinds of things all support a syslog importer. So you configure your application to use a syslog exporter, then they import the log and you can process all the data in it. So starting with syslog is a great start, in my opinion, but if you have a better idea, we can develop that instead. I remember that we had an old issue, which I can't find anymore, about using a tool for collecting...
C: That's something I looked into a while ago, but it was a very young project at the time, and I don't think there have been a lot of improvements since. It's also a VMware Tanzu project, and it's very basic. I think the way it works is that you use some file format, kind of like a Dockerfile, in which you describe the diagnostics operations you want to perform on each node, and then basically they have an orchestrator which is going to SSH into each node and perform those operations for you.
A: Not in the default installation. In the default installation, OpenShift doesn't do that: you have to go to every node and look at the logs on that node, or, if the pod is still running, you access the logs with kubectl logs. But I'm pretty sure that you can configure some logging operator (because in OpenShift everything is an operator) that will add a solution to push all the logs to...
B: I think it's best to make it agnostic, as you're saying, right? We can't really have an opinion about how logs are collected and analyzed, unless we're building additional analysis value on top of that. Like, if we're building something that is domain-specific to Antrea for analyzing the logs, then we may have a reason to say: hey, we need to connect to this particular set of collected logs and perform that analysis.
A: Yes, on the same point: this was something that I wanted to mention at the end of the meeting, but we need to think about including the people in China in the community meetings as well. It would be great to start also having meetings at a China-friendly time, because, at the end of the day, more than half of the contributors to Antrea are there, so I guess it makes sense to have them in the meeting too. But this is something that we can discuss after the meeting.
B: On this, though: even if we don't include something in core Antrea, I do think we need a reference architecture for log collection. If I wanted to stand up a very, very simple Fluentd collection into, you know, whatever we decide to collect that into, we could use the typical... what's it called, Fluentd and Kibana and... help me out here.
B: The log collection piece... come on, what's it called?
B: Thank you, I could not think of the term: Fluentd, Elasticsearch, and Kibana. Basically, at some point, if we want people to start testing this out of the box, even though we don't include it as part of the core, we could say: here is a reference architecture. Antrea, you know, is open: if you want to ship to syslog...
B: ...or if you want to ship to some other enterprise log collector, we don't care. But if you want to try this out from an open-source perspective, here is a very, very simple log-collection setup. Because it's going to be very hard to do demos of some of the features, like flow logging and tracing and all that, without some way to visualize it and show people: here are the logs that we're seeing. Thoughts on basically putting together a contract around that?
B: Well, I'll give you an example: what do our friends over at Calico do? They actually ship a default Fluentd operator, right, and a default Kibana and Elasticsearch-type stack. And it's very simple, nothing crazy; you're just looking at raw logs. If you want to add some dashboard look-and-feel on top of it using Kibana, sure.
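A minimal sketch of the kind of open-source log-collection reference setup being discussed: a node-level collector (here Fluent Bit, as one stand-in for the Fluentd/EFK stack mentioned) tailing the Antrea container logs and shipping them to Elasticsearch. Every path, name, and port below is an illustrative assumption, not something decided in the meeting:

```ini
# Fluent Bit configuration sketch (illustrative values throughout).
[INPUT]
    Name    tail
    # Container runtimes typically symlink pod logs here on each node.
    Path    /var/log/containers/antrea-agent-*.log
    Tag     antrea.*

[OUTPUT]
    Name    es
    Match   antrea.*
    # Hypothetical in-cluster Elasticsearch service backing a Kibana UI.
    Host    elasticsearch.logging.svc
    Port    9200
```

Pointing the output at a syslog server instead would match the "ship to syslog" option raised earlier; Fluent Bit also ships a syslog output plugin for that.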
B: I would say more like an add-on operator, right? Like, if I've got an Antrea stack that I've already brought up, I spin up this add-on operator, and that operator spins up the necessary components to establish log collection and gives me a UI to peer at those logs. Something very simple; again, I'm not looking for anything crazy.
B: So I think that can be part of it, right: I think we build into core Antrea the way to collect. So, on the question of streaming versus static collection: let's break that down a little bit, and help me understand which pieces of the log collection are streaming and which pieces are static. And when a request comes in for support, what would your expectations as a support engineer be? What all do you need to see?
F: I think you want some API you can call to get all the logs and other relevant things. In the worst case, maybe we can dump everything; that may be too much, including every single config file, but just to give you the general idea: I want to dump some state of the system.
B: To regurgitate it back to you: you basically want a copy of what the user intended, right? So whatever configuration the user sent, in terms of network policy, potentially Antrea configuration, etc., and then you want to see how the Antrea management plane actually interpreted that and applied it on each node. So, basically, what is the state of the data plane, or the state of, you know...
A: It's a full state snapshot, plus, say, all the logs that you can think of that could be useful for the purpose. Typically, if you have an appliance, you collect every single log in the appliance and a full state of the appliance, including, for instance, both application state, like API objects and database state, and even operating-system state, like a snapshot of ps, a snapshot of top, all this kind of information. Sure.
A: Most software also allows you to select how much, you know, the same thing that you do with journalctl, right: you can select how far back in the past you want to go. Indeed, the problem that we usually have is that there isn't enough log history for analyzing any sort of failure, because logs usually rotate faster than you would want. So...
B: Right. To help us on the demo side, we have some facility with an ELK stack (that's what I was looking for, the term was ELK stack, or EFK stack) that can capture that. But what we'd also want, if I want to commercially support this, is to build our own support bundle that captures that, so that the support engineer can recall, at any point in time, any snapshot from all of the data that was streamed to it.
A: On the story about collecting the snapshot of the state, the snapshot of the system: I think that even if crash-diagnostics is fairly limited at the moment, it's better to work with a tool like that rather than writing software from scratch, and perhaps use the crash-diagnostics tool to generate this system state collection. Then we can decide exactly what to collect...
A: ...what is useful information and what is not. I believe it will be a mix of Antrea and Kubernetes state, and we can decide exactly what needs to go in. And I don't know if you want to target this one as well for 0.6.0, because I really would like to focus on bug fixing and stability for this release and not add too many features. I mean, this story about the ELK stack is not really a feature; it's something completely separate.
C: Crash-diagnostics, as it is, requires SSH access to all the nodes. And what I'm hearing in this conversation is stuff about adding capabilities to Antrea to stream information to the outside world, so these seem kind of orthogonal. Crash-diagnostics is more: I'm going to SSH into the node; you just tell me, in a very simplified, declarative way, what you want me to execute on each node, and then I'll return that information to you in a nice package.
A: That is correct, and that's why I think one doesn't rule out the other. I think there are two equally valid requirements here. One is to add Antrea to some centralized log collector, like, in your case, an ELK stack; the activity that we're planning here doesn't really do anything in that regard, it's more like showcasing that Antrea can do that. The other one is about providing a facility to build a system snapshot that can be used for debugging failures.
A: The first one is more community-oriented, and the latter feature sounds more commercially oriented, because it's something that would be useful mostly to technical-support people. But it can also be useful for us: for instance, when we get issues filed by users on GitHub, we can ask them, can you please upload your system snapshot? Which also means that we have to be careful with it, because if they start uploading files of hundreds of megabytes, I don't think that GitHub will take them.
B: I think these have all been very valid points, Antonin. The two concerns that I see are: number one, ensuring that our logs on the streaming side, at least the ones going to standard out, or whatever the typical logging mechanism is, capture all the data that we need. But then we also have this point-in-time piece, right: if we need to log into every node to capture some other state, the question I have there is, is that an expensive operation?
C: The thing is, we've been discussing two things. If the end goal is for people opening support issues with Antrea to be able to submit enough information for us, then maybe crash-diagnostics is actually the right tool for that, because this is an open-source project. To submit information, we ask that, yes, you have SSH access into every node...
C: ...and you just run the crash-diagnostics script template that we can provide; it collects all the information, and you have it uploaded somewhere for us to see. I mean, for the community product, that's enough, and maybe that's the right approach, because it's really a local approach: in my opinion, there are actually no changes required on the Antrea side. But if you want it to be an infrastructure that can be extended, then that really doesn't apply. My...
B: My security bells go off when I hear the way crash-diagnostics approaches this, right? Like, yeah, it may be okay to have SSH access to all of the nodes, but that is not going to fly, at least for enterprise adoption. And again, I know that there are different targets we're looking at. From a code-change perspective, it seems to me that that's why we've developed antctl, right? We can put in a...
B: ...we use that command-line tool to put a request in, via the API server, to have our agents basically collect something. I would prefer that approach, but that obviously requires code changes, and it may be too big for 0.6. If that's the case... I think, before we run out of time, we've only got 15 minutes left.
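The antctl-based flow described here (asking the agents, via the API server, to collect state instead of SSHing into nodes) could look something like the following. This is a sketch of the proposal, not a command that existed at the time of the meeting; Antrea later shipped an `antctl supportbundle` command along these lines, so check your version's documentation:

```shell
# Sketch: request a support bundle through the Kubernetes API server,
# so the agents collect logs and state themselves -- no SSH to nodes.
antctl supportbundle
# The resulting archive (agent/controller logs plus runtime state) can
# then be attached to a GitHub issue or a support ticket.
```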
B
Why
don't
we
go
ahead
and
look
at
the
bugs
that
we
need
to
address
in
0.6,
let's
see
how
much
that
fills
up
0.6
and
then
come
back
to
as
Gingin.
You
know
already
pointed
out.
We
really
need
a
better
PRD
on
this
and,
let's,
let's
put
the
PRD
together
and
if
we
have
additional
bandwidth
and
0.6,
we
can
again.
You
know
we
can
begin
this
capability
and
extending
this
capability,
but
but
I
do
think
getting
to
a
stable
release,
especially
with
needing
to
begin
activities
around
bundling
and
packaging.
A: Right. So, in the interest of time, since on the agenda we also have two other topics proposed by Antonin, I would like to hear from you, especially Antonin: we can make a call on whether we want to discuss bugs for inclusion in 0.6.0, or discuss ARM and the other log feature that Antonin was talking about, and then maybe we can review the bugs on Slack. Sure, that's possible. So, should we do bugs or ARM?
A: Okay, perfect. So for the bugs, you can either start targeting them to 0.6.0 or ping us about them on Slack, and we can decide offline anyway. So let's talk about ARM. I've read the issue description and I saw the problems that we have, but it seems now that you have a patch, Antonin, that builds the ARM image, right? Let me stop sharing my screen so you can share yours.
C: Can everyone see my screen? Yep, okay. So yeah, basically, somebody opened an issue; I think they were trying to run Antrea on a Raspberry Pi, and so they were asking for either a Docker image that works on ARM CPUs, or instructions on how to build Antrea for ARM architectures.
C: The first approach I looked at: for a couple of months now, Docker has been supporting a new build command, called buildx, with a different backend, and that command is able to produce what we call a multi-architecture image. It's actually a Docker manifest list.
C: So basically it's a collection of architecture-specific Docker images, and it shows up as one Docker image on your Docker repo; you can just start a container from that, and it's going to pick the appropriate image for your architecture. If you look online, the Ubuntu image on Docker Hub, for example, is actually a manifest list, because it comes with support for multiple architectures, which is convenient for us, because the Antrea image depends on Ubuntu. So I gave this a try.
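The buildx workflow described above can be sketched as follows. `docker buildx` and its `--platform` flag are real Docker features, while the builder name and image tag here are illustrative:

```shell
# Create and select a builder capable of multi-platform builds.
docker buildx create --name multiarch --use

# Build for three architectures at once and push a single manifest list;
# pulling clients automatically get the image matching their CPU.
docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t example/antrea-ubuntu:latest \
    --push .
```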
C: I actually tried to integrate it with our CI on GitHub, using a GitHub workflow, and it takes about three hours to build for those three architectures: amd64, which is the only architecture we support at the moment; arm64; plus armv7, which is a 32-bit ARM architecture, and I think the one used on the Raspberry Pi, at least some generations of the Raspberry Pi. So I think three hours is not reasonable to run on every pull request...
C: ...every time a pull request is updated, that is. But it's definitely reasonable to run for every release, and maybe even every time the master branch is updated, because we usually update it three or four times a day on average. Also, this can be sped up by pre-building the base image, because right now, what the GitHub workflow does is...
C: And doing it this way was quite easy, and I think we can make it work. But the big issue is: should we actually be able to test those images before shipping them as part of a release and saying we support those ARM architectures? Well, the first thing to point out, which is not in this guide, is that Kubernetes itself doesn't really test the ARM images it ships for the Kubernetes control-plane components.
C: It would probably be difficult to set up for our CI system, because we would need to create the EC2 instance, install everything, run the tests, and delete the instance. So there would be a lot of infrastructure logic and code to make this happen. And one other issue is that AWS EC2 only supports arm64, so if we want armv7, the 32-bit ARM architecture, there is no possibility to do it on AWS. Basically, if we want to do a similar thing, we'd have to host our own...
C
Like
a
builder,
maybe
a
couple
like
raspberry
PI's
and
ever
has
very
PI.
So
I
will
be
fine
doing
that.
But
again
there
is
a
lot
of
complexity
for
the
CI
infrastructure
here.
So
the
last
thing
I
tried-
and
that
was
kind
of
like
the
most
promising
step
here
is
just
like
docker
build
X,
is
using
like
emulation,
I,
think
it's
using
human
you
to
build
as
a
multi
architecture,
docker
image
the
manifest
list.
We
can
use
emulation
to
also
run
the
test
and
actually
I
spent
a
couple
of
hours.
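The emulation idea can be sketched like this: once QEMU user-mode binfmt handlers are registered on an amd64 host, Docker can run foreign-architecture images directly, which is enough for basic smoke tests. The image names are illustrative:

```shell
# Register QEMU binfmt handlers for foreign-architecture binaries.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Run an arm64 image on the amd64 host via emulation; `uname -m`
# should report the emulated architecture (aarch64) rather than x86_64.
docker run --rm --platform linux/arm64 arm64v8/ubuntu uname -m
```

Emulated runs are much slower than native, which is part of the CI-cost trade-off discussed here.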
A: I mean, in the wiki we can explicitly document that there is no guarantee that those images will be working. And I think that, to not use too many resources on CI, on the build system, we can probably build the ARM images once a day on master, instead of building them at every commit. I don't know if this will make the workflow more complex.
C
Because
we
can
only
have
one
if
you
use
like
a
different
docker
image,
name
or
a
different
docker
image
tag,
or
that
milky
arch
docker
image
and
put
it
on
our
docker
hub
and
update
the
readme.
So
is
that,
okay,
we
have
those
arm
images.
We
haven't
been
able
to
test
them,
which
is
actually
exactly
what
communities
is
doing
and
they're,
not
testing
them
for
exactly
the
same
reason.
It's
actually
like
really
hard
to
get
like
CI
resources
to
be
able
to
test
on
unarmed
I
mean
arm.
64
is
not.
A: Right, and I have to be rude now and tell you that we are already one minute over time. I know that for some of you it's late, and some of you have to run to other meetings, so maybe we can defer this conversation about ARM builds to Slack as well. It would be nice to use the channel for more discussion topics anyway, as it will allow more people to contribute.
A: So let's move to Slack for discussing the 0.6.0 payload and for continuing the ARM discussion. And for the log discussion, the topic that Antonin brought up, maybe we can either continue the discussion on Slack or on the mailing list, and for sure there will be a slot for it in the next community meeting. With that said, I would like to thank everyone for attending, and I am going to stop the recording now. Thanks again, everyone.