From YouTube: Antrea Community Meeting 12/11/2019
Description
Antrea Community Meeting, December 11th 2019
A
No, I mean, I was just looking at the mailing list, so nothing, nothing major. So the first item that we have on the agenda for today is an introduction to the Antrea OVS pipeline, presented by Antonin. So maybe, Antonin, are you ready? Okay, perfect.
C
All right, so hopefully everyone can see my screen now. So this is a pull request I opened a couple of days ago, and it's going through review right now. Basically, when I started working on Antrea, I found it confusing to debug stuff without knowing exactly what the pipeline was doing, so I took a couple of days before OVSCon and I documented the pipeline. It was like killing two birds with one stone, because I was able to talk about it there.
C
So I did that diagram of the pipeline. I think I'm just gonna cover it; it's about a dozen tables. I'm gonna cover all the tables one by one, show some example flows, and explain what they are for. And obviously, if you have questions on this, just interrupt me and we can make it interactive.
C
We write an integer value to classify the traffic into three categories: local pod traffic, local gateway traffic, and tunnel traffic. Tunnel traffic is everything that's coming from a remote Kubernetes node. We're going to use that information later in subsequent matches in other tables of the pipeline. And I think we're making a few changes at the moment, right now, so that if there is no matching ingress port, that probably means that there is something wrong going on, and so we're just gonna drop the packet.
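A minimal Python sketch of the classification logic described above (an illustration only, not Antrea's actual implementation; the real mechanism is OVS flows writing to a register, and the port numbers and mark values here are invented for the example):

```python
# Sketch of the classification table: based on the OVS ingress port, pick a
# mark (written to a register, reg0, in the real pipeline) so that later
# tables can match on where the traffic came from. An unknown ingress port
# means something is wrong, so the packet is dropped.

FROM_TUNNEL, FROM_GATEWAY, FROM_LOCAL = 0, 1, 2

# Hypothetical port layout: 1 = tunnel port, 2 = gateway port, 3+ = pod ports.
KNOWN_PORTS = {1: FROM_TUNNEL, 2: FROM_GATEWAY, 3: FROM_LOCAL, 4: FROM_LOCAL}

def classify(in_port):
    """Return the register mark for a packet, or None to drop it."""
    return KNOWN_PORTS.get(in_port)  # None -> drop instead of NORMAL

assert classify(1) == FROM_TUNNEL
assert classify(3) == FROM_LOCAL
assert classify(99) is None  # unmatched ingress port: drop
```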
C
We should not have a miss there, so I think, instead of the normal action on a miss, we should set a drop. The normal action here is just something which is installed by default in OVS to say that, okay, if you don't have any tables defined and no flows installed, basically you do the normal action, which just means traditional forwarding. It makes it behave as a Linux bridge, basically.
C
In case one of the local pods is misbehaving. And so what that means is, for every pod, we're going to install two flows: one to prevent IP spoofing and one to prevent ARP spoofing. And if we look at those two flows here, flow number three and flow number five, those were installed for one of the CoreDNS pods. And so the first one is for IP spoofing, yes, and basically the way it works is: if none of these flows matches... Sorry, I should have started with that.
C
If none of the flows matches, it means it's a spoofed packet and we're going to drop the packet. If one of the flows matches, then that means it's a legit packet, and by legit packet I mean it's either a correct ARP packet or a correct IP packet. And we also use that table to basically split traffic between IP and ARP, and if I scroll up in the pipeline here, you see that we're going to go to two different tables based on the type of traffic here.
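The per-pod spoof-guard logic can be sketched as follows (an illustration only; the real checks are OVS flows matching ARP and IP header fields against the pod's known port, MAC, and IP, and the values below are invented):

```python
# Sketch of the spoof guard: each local pod has a known (port, MAC, IP) from
# CNI setup. A packet from a pod port is legit only if it carries the
# expected source MAC and IP; otherwise it is treated as spoofed and dropped.
# Legit packets are also split between the ARP and IP branches of the pipeline.

# Hypothetical pod table: OVS port -> (expected MAC, expected IP).
PODS = {3: ("aa:bb:cc:dd:ee:01", "10.10.1.2"),
        4: ("aa:bb:cc:dd:ee:02", "10.10.1.3")}

ETH_ARP = 0x0806  # EtherType for ARP

def spoof_guard(in_port, eth_type, src_mac, src_ip):
    """Return 'arp', 'ip' (next table to go to), or 'drop'."""
    expected = PODS.get(in_port)
    if expected is None:
        return "drop"
    mac, ip = expected
    if src_mac != mac or src_ip != ip:
        return "drop"  # neither per-pod flow matched: spoofed packet
    return "arp" if eth_type == ETH_ARP else "ip"

assert spoof_guard(3, 0x0800, "aa:bb:cc:dd:ee:01", "10.10.1.2") == "ip"
assert spoof_guard(3, ETH_ARP, "aa:bb:cc:dd:ee:01", "10.10.1.2") == "arp"
assert spoof_guard(3, 0x0800, "aa:bb:cc:dd:ee:01", "10.10.9.9") == "drop"
```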
C
So pretty straightforward. I think we had a discussion a while back about the fact that we don't have any checks for the gateway, and that's because there is no guarantee that traffic coming through the gateway is gonna have, as its source IP, the IP assigned to the gateway interface. And that's because service traffic, for example, is going through the gateway, and for that traffic the source IP is the original pod IP.
A
We have an issue, I believe issue 200, open to put some sort of spoof guard on the gateway, right, to make sure that at least we don't allow all sorts of traffic from the gateway interface, because I think there is a way of doing some sort of spoofing, like, you know, doing SYN flood attacks or something like that, at the moment. But I don't think that it is an attack vector that can be easily exploited, because you first need to gain access to the container host.
C
Alright, so for ARP traffic, what we do next is we go through the ARP responder table, and here we really just do two different things. So, if you look at the flow tables, I think we should actually be able to remove flow two from the pipeline, because we guarantee that only ARP traffic comes here, but I guess it's good to have a default action, which is just like this.
C
So we do two different things. For most ARP traffic, we just do the normal action, and, as I said before, it just means that the OVS switch is going to behave like a normal, regular L2 switch, so we're just gonna do L2 forwarding on the packets, like ARP broadcasts. However, we have a special flow for ARP requests.
C
Sorry, we have a special flow for ARP requests where we're asking for the MAC address associated with a remote gateway. I think I should clarify the documentation here... actually no, no, that's correct. We add one such flow every time a new node is joining the cluster, and the reason we do that is that we use a specific MAC address, which we call the global virtual MAC, for all the tunnel traffic.
C
So every node, for every other node in the cluster, is going to have that kind of route installed through the gateway, which basically tricks the Linux kernel into thinking that we are direct neighbors with the remote gateways, and that's an on-link route. And so that means that the host, when it has to send traffic to that remote gateway, is going to ask for the MAC address of the gateway using an ARP request. And so what we're doing is we just capture that ARP request locally in the switch.
C
And we reply with the global virtual MAC. After that, the OVS switch is going to just send the traffic on the appropriate tunnel; we're gonna see that in a later table. And when the traffic gets to the remote node, to the OVS switch instance on the destination node, the traffic is going to be decapsulated. We're gonna identify the traffic by that global virtual MAC, and we're just gonna forward it directly to the appropriate pod, without first going through the gateway.
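A rough sketch of the ARP responder idea described above (illustrative only; in reality this is an OVS flow that rewrites the ARP request into a reply in place, and the MAC and IP values below are invented):

```python
# Sketch of the ARP responder: ARP requests for a remote node's gateway IP
# are answered locally in the switch with a single "global virtual MAC", so
# the kernel (which has an on-link route for the remote gateway) gets its
# answer, and the traffic is then carried over the tunnel. Other ARP traffic
# falls back to the NORMAL (regular L2) action.

GLOBAL_VIRTUAL_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical value

# Hypothetical state; one entry is added per peer node joining the cluster:
# remote gateway IP -> tunnel destination for that node.
REMOTE_GATEWAYS = {"10.10.1.1": "192.168.0.11", "10.10.2.1": "192.168.0.12"}

def handle_arp_request(target_ip):
    """Reply locally for remote gateway IPs; otherwise use NORMAL forwarding."""
    if target_ip in REMOTE_GATEWAYS:
        return ("reply", GLOBAL_VIRTUAL_MAC)  # captured in the switch
    return ("normal", None)                   # regular L2 flooding

assert handle_arp_request("10.10.2.1") == ("reply", GLOBAL_VIRTUAL_MAC)
assert handle_arp_request("10.10.1.5") == ("normal", None)
```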
C
Yes, we do something very specific, yeah. We have the first bullet point here, which is a bit hard to understand, and actually that's changing a bit in the current code; there is no counterpart for this anymore. But basically, the main purpose of that flow... I mean, obviously we commit all the new connections, so we can have stateful matching on connections in the rest of the pipeline.
C
But the main thing we do is this: if you have a pod sending traffic to a service, that traffic is going to go first through the gateway, where it's gonna be handled by the kube-proxy redirect path, and then kube-proxy is gonna load balance the traffic and select a backend pod for the service. And so let's assume that the backend pod and the pod which is originating the traffic are on the same node; then the backend pod replies to the originating pod.
C
If we don't install that flow, then we're just gonna do L2 switching back to the originating pod: we're not gonna go through the gateway for the reverse traffic, and we're not gonna go through kube-proxy and iptables. And so the packet is actually gonna be incorrect, because it's not gonna have the correct source IP when it's received: it's gonna be the backend pod's IP instead of the service's IP. So you kind of need to force that reverse translation by going through iptables, and this is what this flow is for.
C
Two flows: flow number two is saving the MAC address into a conntrack register, using ct_label. Here you have a move from the source MAC address to the ct_label. And flow number three is just the reverse path, basically; that's why we match on a non-new connection here, and we also match on the traffic mark that we put here (we mark it with 0x20), and then we match on that, like so.
C
In that case, we move back from the ct_label, the conntrack label, to the destination MAC address. But what I told Wenying is that we probably don't need that; we don't need to use ct_label, because we know what the MAC address is going to be. It's always the MAC address of the gateway; it's not like it's dependent on the connection. So actually she is removing the use of the conntrack label, and you will save a few cycles per packet or whatever, because we just don't need it.
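The mechanism described above can be sketched roughly like this (an illustration under the assumption that the conntrack mark is 0x20, as in the flows shown; the MAC values are invented, and the real implementation is OVS ct() actions):

```python
# Sketch of the service hairpin fix: connections entering from the gateway
# are committed with a conntrack mark; reply packets of marked connections
# get their destination MAC rewritten to the gateway MAC, so the reply goes
# back through the gateway and iptables can reverse the service DNAT.

GW_MARK = 0x20
GATEWAY_MAC = "12:34:56:78:9a:bc"  # hypothetical local gateway MAC

def on_commit(from_gateway, conn):
    """When committing a new connection, remember it came via the gateway."""
    if from_gateway:
        conn["ct_mark"] = GW_MARK
    return conn

def on_reply(conn, dst_mac):
    """For established reply packets of marked connections, rewrite the
    destination MAC so the packet is forwarded to the gateway port."""
    if conn.get("ct_mark") == GW_MARK:
        return GATEWAY_MAC
    return dst_mac

conn = on_commit(True, {})
assert on_reply(conn, "aa:aa:aa:aa:aa:01") == GATEWAY_MAC        # hairpin fixed
assert on_reply({}, "aa:aa:aa:aa:aa:01") == "aa:aa:aa:aa:aa:01"  # pod-to-pod untouched
```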
C
Here we do match on register 0, all right, so yeah, and it must be one. I don't remember what one is today; if I didn't put it in the document, I should probably put it. Oh, it's for the local gateway, so it's only gonna match traffic coming from the local gateway, which is what we're interested in for that case that I just described: we're gonna hit the second flow.
A
Because register zero is set to one only for... no, no, it's just that I was... sorry, I must be confused. You said earlier that the third one was for traffic coming through the gateway. Or, sorry, you mean the tenth one, for return traffic coming from a backend pod for a service, right? Sorry, okay, so, the flow.
C
So there is a single flow, and what it says is: if this is cluster IP traffic, just output it to the gateway and don't do anything else; skip the rest of the pipeline. And you can read through the text here, but basically this ensures that we don't drop traffic because of an egress policy rule at this stage.
C
Alright, so then we get to something a bit more difficult, I guess, which is the tables that we use to enforce network policies. So at this stage we haven't done forwarding yet, and we're gonna just enforce the egress rules of network policies; we do the ingress side of things after forwarding.
C
So the tables are very similar in spirit. We use conjunctive match fields, which is a nice feature to avoid an explosion of the number of flows as we increase the number of network policies and the complexity of network policies. And so there will be a second, separate document that Jianjun and I are gonna work on to describe exactly how we implement network policies and translate a network policy into OVS flows, but here we can give a high-level overview.
C
Basically, we use three dimensions for the conjunctive match, and the way it works is that those three dimensions are actually three sets of values, and you're matching three fields against those three sets of values. And if you have a match for each one of them, then we select that conjunction and perform that conjunction's action. The three dimensions here are: the traffic source, where the traffic is coming from...
C
Wait, no, this is egress, so yeah: the traffic source, like the pod originating the traffic, which is the pod to which the network policy applies. Then we have the traffic destination, because this is an egress rule, so: are we allowed to send traffic to that destination? And then we have the ports the rule applies to; basically, the TCP port.
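A small simulation of the three-dimension conjunctive match (illustrative only; in OVS this is expressed with conjunction() actions rather than Python sets, and the addresses below are the example values from the talk):

```python
# Sketch of a conjunctive match: a policy rule is expressed as three
# independent sets instead of a cross-product of flows. The packet matches
# the conjunction only if each dimension (source, destination, port) matches
# its own set, so the flow count is 2+2+1 here instead of 2*2*1 (and the
# saving grows quickly with larger sets).

SRC_IPS = {"10.10.1.2", "10.10.1.3"}  # pods the egress rule applies to
DST_IPS = {"10.10.1.2", "10.10.1.3"}  # allowed destinations
PORTS = {80}                          # allowed TCP ports

def conjunction_matches(src_ip, dst_ip, dst_port):
    """True iff all three dimensions match (the conjunction action fires)."""
    return src_ip in SRC_IPS and dst_ip in DST_IPS and dst_port in PORTS

assert conjunction_matches("10.10.1.2", "10.10.1.3", 80)       # allowed
assert not conjunction_matches("10.10.1.2", "10.10.1.3", 443)  # miss: default table
assert not conjunction_matches("10.10.1.2", "10.10.5.5", 80)   # miss: default table
```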
C
Okay, so, very straightforward. And if we look at the flows, those two IP addresses that show up here, 10.10.1.2 and 10.10.1.3, are the two IP addresses for the two pods which have the nginx label on my local node, on which I'm dumping those flows. So this is kind of easy to read: if the source IP address is one of those two IPs...
C
That means traffic coming from those local pods. And if the destination is one of those two IPs, that means I'm talking to an nginx pod, and if the port is 80, then I'm gonna take conjunction ID 2 and resubmit to table 70. Resubmitting to table 70 basically means I have no violation of a policy rule. However, if I have a miss, then I go to table 60, which is the egress default table, and what that table is for is to enforce the default isolated pod behavior of Kubernetes, basically, I mean.
C
Once a network policy applies to a pod, it becomes an isolated pod, and so now you have to have explicit whitelist rules for traffic to be accepted. And so the first table here, table 50, the egress rule table, is basically just a list of whitelist rules, and table 60 enforces the default behavior if nothing matched in the previous table: if our pod is an isolated pod, which has a network policy applied to it, then we drop the traffic. Here our two isolated pods are our nginx pods.
C
In that case we won't drop, because I think we are using the IP addresses here. So, okay, yeah, sorry, can you repeat that again, Abhishek? ... Well, since we use IP addresses here to drop, we are only dropping traffic for the pods which are isolated, and not really a CIDR or a service. So.
C
Sorry, and one thing to notice here is that we are not calling the output action directly; we're just loading the port number into reg1, the second OVS register, and that's because we haven't enforced the ingress rule tables at this stage yet. And why do we put ingress policy rules after forwarding? It's because, to be able to match on the output port when we enforce the policies, we need to compute that first.
C
So now, same thing, we'll go over this quickly. This is just the counterpart of the egress tables, for ingress policy rules. Here, I think one thing I didn't cover previously, so maybe I'll just talk about it now, is that we do install a high-priority flow using conntrack that says: if this is not a new connection...
C
...just accept the traffic; don't perform the whitelist checks again, just accept the traffic. I think this is a behavior, and I don't know if it is the intended behavior, so if somebody knows, please bring it up, which is: if we have an established connection, like a long-lived connection, and we update the network policy in such a way that this connection is no longer valid, then we're not gonna enforce the network policy right away.
A
As a matter of fact, in my opinion, the behavior that we are implementing right now is not the right behavior, because one can have a connection doing a keepalive every 10 seconds, and that connection would probably never expire. It could be denied by a policy, but that deny action would never take effect on the connection. In my opinion...
A
Enforcing the policy also means breaking existing connections. But this is just my interpretation of the semantics of network policies, so I think that it is subjective, and it can probably also vary, you know, from use case to use case, from situation to situation. And actually, I was very curious...
A
It is something that's definitely worth evaluating, and, in my opinion, whenever I see a flow like flow one, it makes me want to try to find a way to hijack this condition, like, you know, spoof something so that I can make a new connection look like an existing connection, so that I can get a free pass. But yes, you're right; as a matter of fact, that is probably my main concern.
A
I mean, to me, like, you know, from a semantics perspective, it looks more correct to me that, if you say that a connection is not allowed, that applies to existing and new connections, right? But anyway, so we can experiment with that. I am a little bit curious about the remaining flows, so maybe we can move forward with the analysis. Yeah, sure.
C
That's the destination port, so that basically identifies the pod we apply the policy to here, I mean. So the rationale for using the port, and not the IP address, here, I believe, is this: let's say you have two local pods; for those local pods we're gonna do L2 switching, we're not going to do L3 forwarding. So, in theory, I could use the destination MAC address, if I know it, of a pod I'm not allowed to talk to, just by putting in a different IP address.
A
So the only other question that I have is: do you think that there would be value in enhancing the ARP responder to have an entry for every pod, so that, basically, we pretty much stop sending every sort of ARP request over the switch? Or do you think that any sort of performance or throughput gain would be pretty much negligible? ... I mean, it's negligible; we don't send that traffic across, yeah.
C
Maybe it's a question for you too: if I have two pods, like pod A and pod B, and I define a network policy such that pod A can send egress traffic to pod B, but pod B has an egress rule that says it cannot send traffic to pod A, in this case pod A can still connect to pod B. So that means all the reply traffic should be accepted; is that correct? I think that's the expectation.
A
All right, I don't want to sound annoying, but perhaps this is a conversation that we can take to the project mailing list, and/or maybe to GitHub. And I would like now to quickly move to the 0.2.0 release status. We have six issues still open. I think that one of those is just for documentation, which pretty much is what Antonin just showed us today, right, this one.
A
Then, in terms of features, we have the Kubernetes network policy implementation, where the two remaining issues, as we discussed last time, pertain to testing. So let's say that the feature work for it has been implemented. I believe that we still need to take care of... no, we did take care of the "except" block, right? That one was done.
A
Okay, yes, I noticed that issue indeed as well, and yeah, that's something that perhaps we may want to address. And in terms of documentation, the other thing, about this IPAM thing, we already discussed it, so I don't think that there is anything else that would need to be discussed; we just need to verify the status of the CLI. And then, for the issue with this delay in enforcing network egress, which is 197, I have a note from Jianjun; let's say that Abhishek has an idea for it, right?
A
Regarding channels: either we can use a channel... but I was just talking about a Golang channel, not like a new TCP connection to establish. Okay, I'm just thinking, you know, whenever we talk about the CNI... The workaround that we had works for when new containers are added, but it doesn't work for when we apply a policy on existing containers. But we also know that we have another problem in that case, which is, you know, if the connection is already established, the policy won't do anything.
A
So we'll check with Jianjun if this is something that we can target for 197... sorry, you mean really? Yes, because since it affects the end-to-end network policy tests, then we may want to fix it. I understand that, you know, for production use cases it's probably not a massive issue, at least not at the moment, but it would be nice to address it, to make sure that the end-to-end network policy tests pass. All right, so is there anything else you would like to bring up for the upcoming release?
C
Yeah, so there's an issue I opened a while ago, at the very beginning of Antrea. Basically it's an important issue, which is: there was an issue opened for support a while back, and clearly there was no possible action on our part without getting more information about the cluster, and more logs from Antrea and maybe the kubelet. And so basically the question is: how do we make it easy for people to gather that information, so that we don't have to ask every time, "please provide this, please provide that"?
A
It is exactly the same thing that we have with the Prometheus integration: we need to define first what we want to collect, and then maybe we can discuss how we want to collect it. So I completely agree with you, and, in my opinion, what is really important here, apart of course from the Antrea logs, are the kubelet logs.
A
Perhaps the kube-apiserver logs; I usually never find the kube-scheduler logs useful, typically. And another thing that I am not certain about, how it works with Antrea, is the CNI logging. I know that the CNI plugin is not logging into the kubelet log, so it's using a different stream, right?
A
Someone can correct me, but I think it goes to the kubelet log. Okay, I'm... I might just be confused, because normally it's the kubelet log; I thought it was from somewhere else. But if it's the kubelet log, then we have nothing to worry about. And then we need to collect the cluster... the current cluster status, I was thinking.
A
Sorry... as we now are doing that through kube-proxy, maybe services and endpoints are not strictly useful, but I would collect them anyway. And, of course, another thing that is important: the kube-proxy logs, and then the kube-proxy config map. And then I don't know how we want to capture configurations for the kube-apiserver and the kubelet, because I feel that the configuration files also tend to contain secrets, so I don't know if it's okay to just upload the configuration files.
D
I agree with Salvatore that we don't want to, you know, upload any secret information. Is there anything specific in the kube API configuration that is impacting our network configuration, other than things like service CIDR ranges and those sorts of things? Or can we narrow it down to the two or three specific things that we need, and then maybe just be able to parse that configuration and extract the exact things that we need out of it? Yeah.
A
I mean, for me, from the kubelet configuration, for instance, we only need the flags that are used to start the kubelet; I don't think that we need anything else, right. Then there is other information that I would like to collect on the host, which is, for instance, you know, the OVS kernel module version, which could always be useful to have, and also the kernel version, for instance, now with Kubernetes 1.18...
A
Another piece of information that could be useful is the container runtime, and it would be mostly for reproduction efforts, so we need to know which CRI driver is in use, or whether they are just using the Docker runtime. And, you know, there are pretty much just three runtimes now, I think: Docker, CRI-O, and containerd. So we just need to know which one is running and which version is running. I don't figure anything else will be needed; that's pretty much it, I guess. Yeah.
D
We've only got two minutes left. I did want to just... and I can reserve this for the next meeting, but I wanted to at least propose the idea, from a documentation standpoint, of whether we would think about using something like Read the Docs, or if somebody has a better idea for a more structured output, and have like a docs.antrea.io site.
D
It would be very targeted, and structured for whoever the consumer was going to be, whether a contributor or an end user, to be able to very quickly get at the information they need: to either set Antrea up as an end user, or be able to dive in to some of these deeper things, like the flow diagram that was shown today. Any thoughts on that, on setting up a structured documentation site?
A
I would love to do that, but I'm afraid that I'm not sure if we would find time to do that in the short term. We should definitely plan for it, and we should probably have a sort of, let's say, timeline for how we plan to roll out this documentation. Because, as of now, the project is still fairly young; it might be okay that... that might not be needed, because most of the users are also contributors.
D
It's tracked in GitHub. The one thing, whether we use Markdown or not, really... we can debate that and decide what the best solution is. I think the key thing here, from a user perspective, is that we just want an easy way for them to switch between active versions of the docs, right? So, what were the docs at a specific version in the past, versus what the docs are now.
A
So I think Read the Docs might be reST, not Markdown. But if it's not Read the Docs, it's whatever tool uses Markdown, so that we can keep the community documentation in the repo, so that, you know, we manage it with pull requests and whatever. Then we have a make target, like "make doc", that automatically publishes to whatever site we're using. I've been using Jekyll in the past; when I was using it, it was not very mature, so... but maybe it's much better now.
A
Anyway, so let's start with giving it structure, and then we can decide together on where to publish and how to publish it. I agree about not using two distinct markup languages; Markdown for everything should be good. All right, it seems that we are at time, so is there any final item that you would like to discuss?
A
Good, all right. So the items in the agenda that were not discussed today will be deferred to the next community meeting. Speaking of which, I was planning to have one final community meeting before the New Year, the end-of-year break, on next Wednesday, and then, you know, the other two Wednesdays would be Christmas and New Year's Day, so I figured we'd skip those, and then we will meet again on January the 8th.
A
Next week, that's right. So next week will be the last meeting of the year, and then, after that, it will be the 8th of January. All right, so thank you very much for attending; it's been a pleasure, as usual, and talk to you next Wednesday, and with one good afternoon, good morning, or good evening, or good whatever... Thanks.