From YouTube: Antrea Community Meeting 04/24/2023
Description
Antrea Community Meeting, April 24th 2023
A: All right, welcome everyone to this Antrea community meeting tonight, or today. As far as I know, we have three topics on the agenda; feel free to tell me if you would like to add a new one. I think we're going to start with a quick update on the Antrea 1.11.1 release and the important bug fixes that went into that release. After that I'm going to give a quick update on the Antrea UI subproject; that's only going to take a few minutes. And finally, I have a third topic: I want to present a new sink, a new exporter, that I am planning to add to the Flow Aggregator based on an upstream user request.
C: Yeah, I want to give a quick update. After the release of 1.11.0, we identified a few critical bugs, and we have released this new patch release to include the critical bug fixes. The first one is that we found that, actually in previous releases as well, AntreaProxy had some defects in supporting some update cases. For example, when the stickyMaxAgeSeconds attribute of a Service is updated, or when the internalTrafficPolicy is updated, AntreaProxy doesn't reflect the change to the data plane OVS flows correctly. This has been fixed, along with some refactoring of the AntreaProxy code, and backported to this release. I'm trying to backport it to 1.10 as well, but I find it a little difficult to backport the code change, because it also includes some refactoring and has some dependencies on other changes that are only included in 1.11.
C: Let me know if you have concerns that we only fix these update cases in the latest patch release of the latest minor release. The second critical fix is for the EndpointSlice API availability check. This is a bug in 1.11 only; it was caused by a typo in a last-minute commit to the last minor release, and we have fixed it and backported the fix to this release. The consequence of the bug is that, even though we have graduated the EndpointSlice feature to beta and made it enabled by default, AntreaProxy will always fall back to the Endpoints API even when the EndpointSlice API is available. There should be no impact to functionality, except for those who have a strong dependency on the EndpointSlice API. The last one is a race condition in the Antrea agent, when a Pod could join multiple groups — if anyone is more familiar with this part, please correct me — there is a race condition that could make the agent process crash. We have fixed it by making the operations thread-safe in the third-party library, and we bumped to the latest version of that library. Yeah, that's all about the latest release. Thank you.
A: Thanks for the update, Chen, and thanks for the fixes to the AntreaProxy code; I know you did a lot of refactoring to make the update cases more robust. Thank you. Any question from anyone on this release, or on the backporting of the fixes?
A: All right, in that case, thanks again Chen, and I'll move on to the second topic: a quick update on the Antrea UI v0.1 release, the first release. So let me open my slide here; I have a one-slide presentation.
A: Oh, here we go, all right. Hopefully you can see that slide; sorry for the delay. I just wanted to give a quick update and tell everyone that the initial target features, which are essentially feature parity with the old Octant plugin, have been implemented. Essentially, the UI consists of a summary page which displays information about the Antrea components, using the AntreaAgentInfo and AntreaControllerInfo custom resources, and there is a page to run Traceflow requests.
A: Since the first presentation I did on the Antrea UI, I've added support for live-traffic Traceflow, so the UI now supports both regular Traceflows and live-traffic Traceflows, and I added whichever features were missing for Traceflow.
A: In addition to that, there is support for accessing the UI over HTTPS, and basic authentication using an admin password. I've added a decent amount of unit tests, both for the Go backend and for the React frontend. And by the way, I wanted to thank Chen and Chew for the reviews they did of all the pull requests for the Antrea UI.
A: There is also a Helm chart now to install the Antrea UI, and that's actually the only installation mechanism for it at the moment. So it requires Helm, and that's because it depends on some Helm templating functions to generate dynamic data in the YAML manifest. We can revisit that decision later, but at the moment only Helm is supported for installation.
A: We've also had the first external contribution to the Antrea UI, from someone who was a first-time contributor to the Antrea project, so I think that's pretty cool. And I think that adding a front-end component to Antrea could help attract new contributors, because there are developers out there who specialize more in front-end development.
A: So I think that's pretty good. I merged one PR today, and that's actually the last PR I wanted to merge before the first release of the UI. I wanted to make that release sooner rather than later, so I'm going to make it this week, because until there is an actual release of the UI, it's not really convenient to install it using Helm. You would basically need to, for example, clone the repo and install it locally, because the Helm chart is not going to be published to the Helm repository for Antrea until we do an actual release in GitHub. So I'm going to make that release this week, and then it will be possible for users to install the Antrea UI easily using Helm. And once this is done, it will also be possible to officially deprecate the Octant plugin in Antrea; I actually have a PR which is ready to be merged.
A: It's been approved, but I was waiting for an actual Antrea UI release before I move forward with the deprecation process. Once this PR is merged, the Octant plugin will be officially deprecated in Antrea 1.12, and then post-1.12 we will remove the Octant plugin code, so it will be completely removed in the 1.13 time frame.
A
I
think,
in
that
case,
having
a
single
minor
release
is
enough
of
a
warning
and
ends
up
to
people,
because
I
assume
most
Stockton
users
are
kind
of
like
familiar
with
the
fact
that
Auckland
has
not
been
maintained
and
is
not
no
longer
maintained
and
has
not
been
maintained
for
I
think
one
year
and
yeah
that's
about
it
for
the
update.
If
you
haven't
tried
the
UI,
please
give
it
a
try.
It's
very
easy
to
install
or
wait
for
the
release,
and
then
it
will
be
even
easier.
A: Easier to install, of course. And share your feedback; I'm really looking forward to future contributions. I think we can turn this into a great feature for users. We can have new UI web pages for Antrea APIs; I think Jianjun mentioned group membership for network policies in the past — this is something we could have pages for. I think we could incorporate some elements of flow visualization and even metrics visualization, and have everything in one place. So I think we can turn this into something very nice.
A: All right, no questions, so let me stop sharing this. And sorry guys, everyone is stuck with me tonight, because I'm going to move to the last topic that we have on the agenda, and it's also me presenting. I see a comment from Chen in the chat — yeah, thanks Chen for this comment, and thank you for the very thorough reviews that you did for the UI, especially the first PR, which was pretty massive.
A: Let me share my screen one more time for the other presentation that I have.
A: It's saying my screen sharing is paused. Let me — oh, I see, okay. Hopefully you can see the new PowerPoint here. I want to talk about a new sink — I guess we call it an exporter in the context of the Flow Aggregator — that I've been working on for the Flow Aggregator. It's called the log exporter, and essentially the idea is to enable the Flow Aggregator to support logging flows directly to a local file, as an alternative to exporting those flows either to ClickHouse or AWS S3, which is something we support today as well.
A: So, a quick recap. I know that some of you work on the Flow Aggregator or use the Flow Aggregator, so you are probably familiar with the architecture already. But essentially, we have a component in the Antrea agent called the Flow Exporter; that component periodically exports connection data, which is collected from conntrack on Linux, to a centralized entity, the Flow Aggregator, which runs as a one-Pod Deployment in each Antrea cluster.
A: If you have enabled this feature, of course. The export is done using IPFIX messages over TCP, so the idea is that we don't drop or miss any connections, because we want that system to be aware of every connection that's going through the Antrea network. So we export that data to the Flow Aggregator, where it's collected by the part of the Flow Aggregator which we refer to as the collector.
A: The aggregated flows are then sent to a sink using one of the exporters that we support, and at the moment we support three different exporters. We have what I want to call the legacy IPFIX exporter, which is able to export aggregated flow data using IPFIX to an external collector. We have the ClickHouse exporter, which is really used by the Theia subproject today; that exporter uses the ClickHouse Go driver to export the data into a ClickHouse database.
A: And finally, the most recent addition is the S3 exporter, which supports uploading the flow records as CSV files to an S3 bucket of your choice. So those are the three exporters we support today, and I'm planning to add a fourth exporter, which implements the same exporter interface and is called the log exporter, and can write individual aggregated flows to a local file, currently in CSV format, so it's very easy to consume those flows from this file.
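To make the "same exporter interface" idea concrete, here is a minimal Go sketch of what such a common interface could look like; the type and method names are illustrative assumptions for this writeup, not Antrea's actual code:

```go
// Hypothetical sketch of a common exporter interface that the IPFIX,
// ClickHouse, S3, and new log exporters could all implement. Names and
// signatures are illustrative assumptions, not Antrea's actual API.
package exporter

// Record stands in for an aggregated flow record produced by the collector.
type Record struct {
	SourceIP, DestinationIP   string
	SourcePod, DestinationPod string
	// ... timestamps, ports, protocol, network policy name/rule/action, etc.
}

// Interface is implemented by every sink the Flow Aggregator can write to.
type Interface interface {
	// Start initializes the sink (opens files, connections, etc.).
	Start()
	// AddRecord hands one aggregated flow record to the sink.
	AddRecord(record *Record, isRecordIPv6 bool) error
	// Stop flushes any buffered data and releases resources.
	Stop()
}
```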
A: Before I move further — the next slide is about explaining why I think this log exporter is a good idea — is there any question on this slide?
A: All right, I think most folks in the meeting should be familiar with that architecture already.
A: So, it started with an issue that was opened, I think, a year ago, or maybe even more, by an Antrea user, and that user was saying that they find a lot of value in the local audit logs that we support. If you're not familiar with those, they are logs that each agent can write to the local file system of that agent's Pod, with information about allowed, denied, or rejected connections. This is enabled on a per-network-policy basis, either by adding an annotation for Kubernetes NetworkPolicies, or by turning on a flag for Antrea-native policies; that flag is part of the policy specification.
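For Kubernetes NetworkPolicies, the annotation in question is `networkpolicy.antrea.io/enable-logging`. As a small illustration (this sketch is not from the meeting itself, and the clientset wiring is omitted), it can be set programmatically with client-go:

```go
// Sketch: enable Antrea audit logging for an existing Kubernetes
// NetworkPolicy by setting the enable-logging annotation via a merge patch.
package auditlog

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func enableAuditLogging(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"networkpolicy.antrea.io/enable-logging":"true"}}}`)
	_, err := client.NetworkingV1().NetworkPolicies(namespace).Patch(
		ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```

For Antrea-native policies, the equivalent is the `enableLogging` field on each policy rule.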
A: Okay, so they were using those audit logs, but there are a couple of, I want to say, quote-unquote issues — of course it's by design, but there are a couple of issues for users with those local audit logs. The major one, which that user was complaining about in the issue, is that there is missing information; in particular, the logs are missing Pod names for the source and the destination. For one of those Pods it would be easy for us to add the Pod namespace and name: that would be the local Pod, the one to which the network policy is applied.
A: But it's not easy, or convenient, or realistic, I want to say, for the agent to get information about the other side of the connection — for an ingress policy rule, for example, that would be the source of the connection, the source Pod name and namespace — because that would mean that each agent would need to get that information from the Kubernetes API server and cache it locally. And I think, by design, we don't want each agent to have knowledge about all the Pods in the Kubernetes cluster, because that can represent a lot of information for a large cluster, and potentially put some stress on the Kubernetes control plane if a lot of Pods are created and deleted. So we want to avoid that.
A: So that's the main issue, I think. The second issue would be that, by nature, those logs are also local to a node, to a specific agent, and if you want to collect them in a central place to be easily consumed, you need to deploy additional software to do that. We have a cookbook, which is part of the Antrea documentation, that describes how to deploy Fluentd as a DaemonSet in the cluster to export all those logs to a central collector.
A
And
the
point
I
want
to
make
is
that,
while
we
can
improve
those
local
audit
logs,
we
could,
for
example,
add
a
pod,
namespace
and
and
put
name
for
the
apply
two
part
for
the
local
pod,
which
is
part
of
that
connection.
I
think
that
the
bottom
line
is
that
we
should
avoid
duplicating
work
and
functionality
with
what
we've
done
in
the
flow
aggregator
and
in
Thea,
and
so
the
flow
aggregator
should
already
be
processing.
All
the
flows
should
be
aware
of
all
the
flows.
A
Of
course,
there
is
a
little
bit
of
a
delay
because
I
think
it
depends
on
the
timeouts
you
configure
and
in
the
flow
exporter
and
the
aggregator,
but
typically
we
can.
We
can
say
that
there
is
potentially
a
60
second
delay
between
the
connection
being
initiated
and
flow
aggregator
becoming
aware
of
that
connection
and
sorry
what
was
I
thinking
about
yeah.
A
So
there
is
a
delay,
but
the
flow
aggregator
already
has
all
the
flows
and
all
the
information,
because
after
it
performs
the
aggregation
process,
all
the
flows
have
been
enhanced
with
all
the
necessary
information,
so
put
name
spaces
spot
names
for
both
sides
of
the
connection
network
policy
info.
We
have
the
network
policy
name
the
the
role,
the
network
policy
action
performed
by
that
role.
So
we
have
a
lot
of
information,
and
so
all
that
information
is
already
in
the
flow
aggregator.
A
Obviously
the
flow
aggregator
can
already
export
that
information
to
Thea.
As
we've
seen
before,
we
have
three
different
exporters,
but
some
users
may
be
reluctant
to
deploy
CR
with
a
clickhouse
exporter
and
all
the
CIA
software
just
to
get
access
to
those
audit
logs,
and
so
my
solution
is
to
add
a
new
exporter
to
the
flow
aggregator
which
is
going
to
leverage
everything
we've
dealt
inside.
A
The
flow
aggregator
and
I
would
say
that
the
flow
aggregator
is
pretty
easy
to
deploy
at
this
stage
and
maybe
lightweight,
at
least
in
some
cases,
and
we
can
rely
on
the
flow
aggregator
to
log
all
the
flows
to
the
pods
file
system.
So
it's
just
an
alternative
to
adding
those
flows
to
a
clickhouse
database,
for
example
it.
Obviously
there
are
some
scaling
considerations
here
in
terms
in
case
a
very
large
number
of
flows
have
to
be
logged
and
I
can
come
back
to
that
later.
B: Yeah, okay, so I have a question about the audit logs. If a connection is blocked, it will appear in the audit logs. But if that connection is blocked, will it appear in the Flow Aggregator data?
A: Yes, with the same caveat that, for those connections, the information, if I recall correctly, comes from OVS directly using a packet-in message, and we have rate limiting in place to avoid overloading the agent with too much information — if someone keeps trying to send UDP packets, for example, which match a network policy drop rule.
A: So if someone sends a UDP stream, every packet in that stream is going to hit the connection — sorry, the network policy rule — and, to avoid sending too many packet-in messages to the agent and increasing CPU usage, we have rate limiting in place; I think it's 100 packets per second. So if we exceed that rate, we will drop the packets, and the information in that case will not be part of the audit logs and will not be seen by the Flow Aggregator.
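A rough sketch of that kind of packet-in rate limiting, using golang.org/x/time/rate; the 100-per-second figure is the one quoted above, but the actual Antrea mechanism, names, and values here are assumptions:

```go
// Sketch of rate-limiting packet-in processing. Over-limit packet-ins are
// simply dropped, so the corresponding connections show up neither in the
// audit logs nor in the Flow Aggregator, as discussed above.
package packetin

import "golang.org/x/time/rate"

// Allow up to 100 packet-in messages per second, with some burst headroom.
var packetInLimiter = rate.NewLimiter(rate.Limit(100), 200)

func handlePacketIn(pkt []byte) {
	if !packetInLimiter.Allow() {
		return // over the limit: drop the packet-in
	}
	logAndExport(pkt) // write the audit log entry, update connection state, ...
}

func logAndExport(pkt []byte) { /* elided */ }
```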
C: Yeah, I actually had this — one of my questions was the same as the previous one. Could you remind me how we collect the flows we export from the Antrea agent? Is it periodic, or is it triggered by something?
A: So, for established connections, the main mechanism is that we do polling of the conntrack table — and I've been thinking for a long time about potentially moving to event-driven conntrack — but for now, every minute we poll conntrack and we get a full list of connections in the Antrea conntrack zone, I think. If someone is more familiar with that aspect, they can correct me.
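In sketch form, the polling loop described here looks something like the following; the types and function names are placeholders, not Antrea's actual code:

```go
// Sketch of periodic conntrack polling: every interval, dump the full list of
// connections in the Antrea conntrack zone and hand them to the exporter.
package flowexporter

import "time"

type Connection struct{ /* 5-tuple, byte/packet counts, timestamps, ... */ }

// dumpConntrack stands in for reading the conntrack table (e.g. via netlink
// on Linux), filtered to the given zone.
func dumpConntrack(zone uint16) []Connection { return nil }

func pollConnections(zone uint16, interval time.Duration, out chan<- []Connection) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		// A full dump every interval; event-driven conntrack is mentioned
		// above as a possible future improvement.
		out <- dumpConntrack(zone)
	}
}
```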
C: And since there's a delay, do you think it could happen that we just have the Pod IP in the exported flow, and by the time we collect the Pod name information later in the Flow Aggregator, the Pod IP to Pod name mapping has already changed, which could cause wrong flow information?
A
So
it's
too
bad
there's
no
one
from
the
I.
Don't
think!
There's
anyone
from
that
team
here,
but
so
for
the
applied
to
pod.
That
information
is
populated
by
the
flow
exporter.
So
hopefully
it
should
be
current,
but
I
think
even
there,
because
there
is
a
polling
delay,
it's
possible
that
it's
outdated,
I,
don't
remember!
A
If
we
compare
the
Pod
creation
timestamp
with
the
flow
start
time
when
we
map
the
IP
address
to
the
Pod
name,
maybe
that's
something
worse
investigating
if
we
could
do
that
to
make
sure
that
we
don't
map
it
to
the
wrong
pod.
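The check being suggested is simple; as a one-function sketch (assuming reasonably synchronized clocks, which is discussed next):

```go
// Sketch of the suggested sanity check: only trust an IP-to-Pod mapping if
// the Pod already existed when the flow started.
package flowaggregator

import "time"

func mappingPlausible(podCreationTime, flowStartTime time.Time) bool {
	return !podCreationTime.After(flowStartTime)
}
```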
A: Yeah, but even then, without time synchronization, I don't know how accurate it would be — well, I guess it's a wider problem — but assuming the nodes have synchronized times, I guess we could check the Pod creation time to make sure we have the right Pod.
A
Okay,
thank
you.
It's
a
bit
orthogonal
questions
right,
those
those
are
more
like
wider
topics
about
the
flow
aggregator
implementation,
but
yes
or
Worse,
looking
into
I,
think
I'll
I'll
check
with
Anna
and
and
Yong
Ming
tomorrow
and
revisit
those
questions
with
them.
If
that
sounds
good.
C: I thought that if it's only for flow visualization, it might be okay, but if this is going to be used for audit logging, it may have higher requirements on the correctness of the information.
A: Yeah, that's a good point. I think we would have this problem with any central process that would collect those flows and do the mapping between IPs and Pod names, because there could always be a delay, in theory. So yeah, I think we have to be careful about that, but I think that problem would apply regardless of how we do the mapping.
A: Let me get into that in the next slide; I think I'll just wrap up after that. So, this is the PR for the log exporter. It's about 1,000 lines of code, including unit tests. For log file management, we use the same library as for the audit logs, which is called lumberjack. Writes are buffered for efficiency, with, I think, a max latency of five seconds before we write to the file — not a big deal, given the existing latency between the Flow Exporter and the Flow Aggregator.
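A minimal sketch of buffered flow logging on top of lumberjack, with a periodic flush to bound the write latency; the rotation values and names here are illustrative, not the actual implementation:

```go
// Sketch: CSV flow records buffered in memory and flushed at least every
// five seconds, with file rotation handled by lumberjack.
package flowlogger

import (
	"bufio"
	"fmt"
	"strings"
	"sync"
	"time"

	"gopkg.in/natefinch/lumberjack.v2"
)

type flowLogger struct {
	mutex  sync.Mutex
	writer *bufio.Writer
}

func newFlowLogger(path string) *flowLogger {
	l := &flowLogger{
		writer: bufio.NewWriter(&lumberjack.Logger{
			Filename:   path,
			MaxSize:    100,  // MB before rotation
			MaxBackups: 3,    // rotated files to keep
			MaxAge:     7,    // days to retain rotated files
			Compress:   true, // gzip rotated files
		}),
	}
	go func() {
		// Flush periodically so a record is never delayed by much more
		// than the five-second max latency mentioned above.
		for range time.Tick(5 * time.Second) {
			l.mutex.Lock()
			l.writer.Flush()
			l.mutex.Unlock()
		}
	}()
	return l
}

// LogRecord appends one flow record as a CSV line to the buffered writer.
func (l *flowLogger) LogRecord(fields ...string) error {
	l.mutex.Lock()
	defer l.mutex.Unlock()
	_, err := fmt.Fprintln(l.writer, strings.Join(fields, ","))
	return err
}
```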
A: And this is a screenshot of the current configuration for the new exporter. To answer Elton's question: those parameters — maxSize, maxBackups, maxAge, and compress — are taken directly from the log management library that we use, and they let you configure exactly the volume of logs that you want to keep around.
A: And there is a filtering parameter, filters, which lets you decide which flows you want to keep. By default, all flows are logged, even when there is no network policy applied to them, and you can use different filtering options to log different flows: for example, you can log only protected flows (flows with a network policy applied to them), or only connections which are denied.
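As a hypothetical Go view of the options just described — the field names mirror the slide's parameters, but this is not the actual Flow Aggregator configuration API:

```go
// Hypothetical mapping of the log exporter options to a Go struct.
package flowlogger

type FlowFilter string

const (
	FilterAll       FlowFilter = "all"       // default: log every flow
	FilterProtected FlowFilter = "protected" // only flows with a network policy applied
	FilterDenied    FlowFilter = "denied"    // only denied connections
)

type FlowLoggerConfig struct {
	Path       string       // log file location inside the Pod
	MaxSize    int          // MB before rotation (from lumberjack)
	MaxBackups int          // rotated files to keep (from lumberjack)
	MaxAge     int          // days to retain rotated files (from lumberjack)
	Compress   bool         // gzip rotated files (from lumberjack)
	Filters    []FlowFilter // which flows to log
}
```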
A: And this is an example of a log file. I guess I should have included the column names here, but from memory: there is the time at which the connection started and the time at which it ended; then the source IP and destination IP; then the port numbers (the first two flows are ICMP, so those port numbers are zero); then the protocol; and then we have — what is that? — the source Pod and the source Node. After that we have the destination; for the first flows here, it's to an external IP address, so there is no information about the destination Pod. And then we have information about the policies: if you look at the third line, you can see that there is an Antrea ClusterNetworkPolicy which applies to that traffic, and you have the name of the policy and the name of the rule.
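Since the records are plain CSV, consuming them is straightforward. A small sketch, assuming the column layout recalled above (start time, end time, source IP, destination IP, source port, destination port, protocol, source Pod, source Node, destination Pod, policy name, rule name) — the exact layout and file path are assumptions:

```go
// Sketch: read the flow log and print only the flows that matched a policy.
package main

import (
	"encoding/csv"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/logs/antrea-flows.log") // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	r := csv.NewReader(f)
	r.FieldsPerRecord = -1 // tolerate records with differing column counts
	records, err := r.ReadAll()
	if err != nil {
		panic(err)
	}
	for _, rec := range records {
		// Columns 10 and 11 are the assumed policy and rule names.
		if len(rec) >= 12 && rec[10] != "" {
			fmt.Printf("%s -> %s matched policy %s (rule %s)\n",
				rec[2], rec[3], rec[10], rec[11])
		}
	}
}
```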
A: And I think, yeah, I have some possible future work here, but there were some very good questions tonight, so I want to follow up on those, especially the last one from Chen. I want to make sure that the mapping we do between IP addresses and Pod name information is correct, and that, if we cannot get the right mapping, we obviously still log the flow, but just with IP addresses. I think that should satisfy requirements for audit logging, because basically, with the local audit logs today, what we have is the IP address.
A
So
ideally,
we
should
be
able
to
map
to
pod
namespace
and
names
correctly,
and
if
we
cannot
decide
that
this
is
the
correct
mapping,
then
we
should
not
try
to
add
a
a
wrong
wrong
names
to
to
the
logs.
So
I'll
follow
up
on
on
those
questions.
A: All right everyone, any other questions on the exporter, the configuration options, or the example I showed you, or any general concerns about this feature?
A: It may increase — it will increase CPU usage, of course — but yeah, they can all run at the same time, or any combination of those exporters.
A: All right, looks like there is nothing, so everybody is getting 20 minutes of their time back today. Thanks everyone for attending, thanks for your attention and for your questions, and I'll see everyone in two weeks.