[SIG-Network] Ingress NGINX Bi-Weekly Meeting for 20220901
A
Hello, everyone. This is September 1st, 2022. This is the SIG Network Ingress NGINX project. This is a CNCF subproject, so that means it is beholden to the CNCF code of conduct, which essentially means: be awesome to each other. If you have any issues, please report those to Ricardo and myself, or to the SIG Network leads. With that: I know we had a little bit of a break; a couple of us took some vacations and had some fun, so we're back to the real world, back to work. So with that, go ahead: any of the new members who want to introduce themselves, what you're looking to get out of the project, whether you're looking to help, or you're just interested in what we're doing.
B
I'll go. My name is Dylan Turnbull. I actually work for the NGINX business development and alliances group, and I primarily work in the K8s space and on our ingress for the rest of the company, the other ingress stuff, mainly partner and alliances work. I'm mainly here to find out what's going on, see what's happening. That's pretty much it.
B
Hi, I've been to this meeting once; we discussed OpenTelemetry, if you remember. Yeah.
B
It was a while back. I know you're working on stabilization, so I don't want to talk about OpenTelemetry now, but the reason I'm in the meeting is that I would like to be active in this SIG. I went through the code because of OpenTelemetry.
C
We waited... we troubled Ricardo for about half an hour with a discussion with SR.
D
I do, okay. So hey, my name is Ricardo. I work for VMware. I am one of the maintainers of ingress-nginx, along with a lot of other folks. I've been in this project since 2018.
A
We appreciate all your contributions, Ricardo. And James Strong: I'm a solutions architect for Chainguard, and I've been a part of this project for... oh, I don't know, it's been going on two years now. I don't know, two years, wow.
A
I mostly make sure Ricardo's doing what he's supposed to be doing now. I help shepherd things along; I try to keep things organized, doing reviews and general housekeeping, all the stuff that most people don't want to do.
A
There were a couple of issues that I pulled up in the issue triage that are in the open topics we can look at. I want to go through those four, because they seem kind of important from what I was looking at, and then we can talk about project stabilization stuff, where Ricardo is going to give an awesome update on the data plane and control plane split, and we can go from there.
A
I've got a couple of things. So, is anybody okay with that, or is there anything top of mind that we should look at from an issues perspective?
C
Can we sort out the release question first?
D
Well, I would just look at whether you have things to cherry-pick. I guess we got some bug fixes in the last one or two weeks. If someone wants to cherry-pick to the branch, we can just cherry-pick and make the release. I'm fine with that, I guess.
D
We are not prioritizing... yeah, we don't care about features right now for the new releases, right? It's not that we don't care about them at all; we do, and we want to get them into the next releases. We are just not prioritizing them, as James said. But for the first one, the CVE ones, I guess just making a new release will... even without cherry-picking the image.
D
Sorry, we need to cherry-pick it, but yeah, anyway: just cherry-picking the image, the final image, the Alpine image for the final container, will solve it, right? So we don't need to cherry-pick the whole thing, like the test runner or whatever, to this branch; just the final Dockerfile in rootfs, to use the last image. And for the bug fixes, we need to cherry-pick them as well. The main problem that I can see right now is, again:
D
We have that situation where the cloud builds are configured to run only on a specific branch, right? They're only running there.
D
So I think this is the only real problem: this one, and people that rely on the manifests that are on main, so people that just go to main and do curl whatever. So I would just split this into efforts. First, we need to open an issue for that, saying: hey, release a new version. And in that issue we just need to clarify everything that we need, right? So: cherry-pick the image, cherry-pick...
D
The
bug
fixes
then
make
sure
that
the
cloud
build
runs
and
make
sure
that
our
manifests
even
the
main,
even
the
ones
in
main
they
are
pointing
to
the
right
release
and
I
am
not
against
just
changing
the
manifesting
main
to
point
to
that
one.
So
we
we
keep
in
mind.
Just
the
I
mean
version,
131
one,
three
one,
one,
three,
two!
That's
that's
fine
right.
I
think
it's
easier
that
than
again
creating
a
stable
text
file
and
changing
everything
in
kind
or
whatever
kind
of
documentation
or
whatever,
okay.
But
then.
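The flow being described (a fix lands on main, then gets cherry-picked onto the already-cut release branch) can be sketched with plain git. The repo, branch name, and commit message below are made-up stand-ins for illustration; this is the git mechanics only, not the project's actual release tooling:

```shell
# Toy repo demonstrating cherry-picking a fix from main onto a release branch.
# All names and messages here are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch release-1.3                      # the already-cut release branch
git commit -q --allow-empty -m "image: bump alpine base for CVE fix"
fix_sha=$(git rev-parse HEAD)               # the fix, landed on main/master
git switch -q release-1.3
git cherry-pick --allow-empty "$fix_sha"    # bring only the fix over
git log --oneline -1
```

The same pattern applies per commit: only the Dockerfile/image bump and the bug fixes get picked, not unrelated changes like the test runner.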
D
No, I didn't, but we had some features and some other stuff that has been merged, and I don't want... so we can, actually, let's do this: can we do a comparison between the commits from the last release and where we are right now?
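The comparison being asked for is a plain git range query. The tag name below mirrors the project's `controller-vX.Y.Z` tag style, but the repo and commits are toy stand-ins:

```shell
# List what has landed since the last release tag.
# Tag name and commits are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "release prep"
git tag controller-v1.3.0                      # pretend this was the last release
git commit -q --allow-empty -m "fix: reload race"
git commit -q --allow-empty -m "feat: new annotation"
git log --oneline controller-v1.3.0..HEAD      # commits since the release
git rev-list --count controller-v1.3.0..HEAD   # just the count
```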
D
Yeah, so at least we can see the... the real problem is that we already cut a branch, right? So, but can you take a look, James, like...
C
Yeah, but besides the problem of the branch, can you say it out loud: what would be the problem to solve if we want to make a release out of main now?
C
Because, Ricardo, we set the expectation for some of these PRs that we will merge them; it's all done, ready for review, and we'll merge and then we'll release. So it means we're actually looking at January.
D
This won't cause issues, because the e2e's... because it's already passing on the end-to-end tests, and I didn't change that, yeah.
A
So, about that: I was going to ask, and I was going to bring it up when we start talking about the control plane split. Do we want to do what we did with the v1 update and just have that on a separate branch, with a separate release candidate, like a beta release, like we did? Do the same thing? Then we wouldn't have this problem that we think we're going to have.
D
Yeah, but this is something that I need to plan, so I won't block you folks because of my 4k-line PR right now. I will have a separate branch; I will work on that separate branch. I want to merge it in a way that nginx still works. I mean, that's not great, but it still works both ways.
D
I think it would be good. I mean, right now we can cut a release from main; there is no impactful stuff. But this is not sustainable if we are going to keep two branches again, right? One with the control plane split and one without the control plane split. I don't know if you are going to do that.
D
I was thinking about implementing the control plane split but keeping the real control plane still within the nginx code, at least for the next release. But I need to plan that. So right now there is no problem in cutting a branch instead of cherry-picking, but before I do my merge, I think we need to solve this branching problem.
A
Okay, I'm thinking tomorrow afternoon we'll do the same thing: put up a meeting and start the 1.3.1 cut.
D
Tomorrow afternoon... I mean, I can be asynchronous; I can be on Slack asynchronously if you need me. It's just that I won't be able to join a meeting tomorrow.
A
That's also one of the reasons why I added the release notes section to all of the PR and feature request templates. So when people put in PRs, let's make sure that they're filling out the release notes section; otherwise we shouldn't be accepting the PRs.
A
I mean, I agree, but that also, like you said, takes two to three days to make a release cut. You have the link to the GitHub issue and the commits; we'll try to make it as clean as possible. But again, I'm not going to spend two to three days.
A
So we had someone put in a request about backporting something into the legacy branch. It looks like it's just changes to the controller, to the Helm chart; it looks like he's just adding some values. But this does bring up the point:
A
Should we...? It's been up for a while; we're like four or five releases away, and I think two or three Kubernetes releases away. We did discuss dropping legacy support. I don't think we should actually delete the branch, but no...
D
Yeah, and I think, now that Kubernetes 1.25 was released, there is no more reason we should keep things that support 1.14 or 1.15. I just think we need to state it loud and clear: hey, we are going to do a last release for the legacy branch, and we won't do support for it anymore.
B
I think we can drop the legacy branch support, yeah, okay.
A
So, okay, I don't think we have many PRs in the legacy branch anyway, but I'll go ahead and close this one out and put the note out there.
A
We can also make that part of the update when we put out the beta for the control plane / data plane split: we'll also mention that we're not accepting changes to and not supporting the legacy branch, which is pre... 0.49, I think, or 0.51 was our last one. Those won't be supported anymore.
D
One of the things that I'm doing, actually not right now, but... I'm splitting the webhook code out of the ingress controller as well. Someone asked in some meeting, yeah, someone asked in some meeting or some channel: hey, can we have, like, a light webhook or whatever? And yeah, it makes total sense to me.
D
The thing is that we do have the webhook, and the webhook also tests the nginx configuration file, right? And we won't be able to test the configuration anymore. But this can be sort of up to the admin to enable or disable: having a heavier webhook, or a lighter one that just checks the business logic but not the nginx configuration.
D
I won't commit myself to take a look at this one, because I really want to focus on the split of the control plane and data plane, but if someone wants to take a look, this seems pretty easy.
C
So it's very simple: if you put a host in the rules.host spec, that host needs to be in the SAN of the cert that is used, and it also needs to be listed in the tls.hosts spec, and it's not there. So he's using some example or something that is not there in the TLS spec.
C
No, his whole claim is: this works, and this will work. The admission webhook will not block this, simply because the TLS spec applies to the entire object... yeah, I'm saying from an end user's perspective.
D
Okay, but I'm putting it a different way. Okay, imagine that you configure your ingress as mysite.com on port 80, and you don't state that you have TLS or whatever; you don't put the TLS rule. Okay, and then I come later, and I see that you made the mistake of not enabling TLS, and I do exactly the same configuration as you do, but I enable TLS on my side, right?
D
So
you
won't
decide
the
website
without
the
tls,
and
I
will
want
the
website
with
the
tls
right
and
today
and
today
on
the
browsers.
What
usually
they
are
doing
is
they
are
trying
first,
to
connect
to
part
to
port
443
and
then
go
back
and
fall
back
to
part
80..
So
I
just
steal
your
domain.
Your
your
ingress
makes
sense.
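The hijack scenario being described looks roughly like the following pair of Ingress objects. All names, namespaces, and the TLS secret here are hypothetical, purely to illustrate the shape of the two configurations:

```yaml
# Hypothetical illustration: same host claimed twice, only the second with TLS.
# First ingress: the original owner, HTTP only (no tls section).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite-http
  namespace: team-a
spec:
  ingressClassName: nginx
  rules:
    - host: mysite.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-a-svc
                port:
                  number: 80
---
# Second ingress: another party adds the same host, but with TLS,
# capturing the HTTPS traffic that browsers try first.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite-https
  namespace: team-b
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mysite.example.com
      secretName: team-b-tls
  rules:
    - host: mysite.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-b-svc
                port:
                  number: 80
```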
C
So the way I understand it, what will happen is: the Lua script, or whatever we use for synchronization, will just go check, and if it is the same spec for host as well as path, no difference, it will just add... it will add a new annotation, or whatever new nginx directive, if that is required. Otherwise it will just... forget it, it just gets added. No, it's not... he expects a webhook rejection, and that's not going to happen.
C
Actually, you can see his claim. He actually writes that in his second example, yeah, the second YAML that he's using, he doesn't have the first rule, which is a dummy example and something-something. If he doesn't have that, then the webhook rejects it. He's actually written that somewhere down below, in a note: it only doesn't reject when he has two rules.
C
In one object, in one YAML, one intended ingress payload.
A
All right, I think the question is whether it should or shouldn't: if I deploy this second one, should the admission controller decline it? I think that's the base question here.
C
Because, as per whatever logic you're writing, the way I understand it so far: there are two rules, not just one rule. If there were only this one rule, then it would be an exact copy of an existing one, and then the webhook would reject it.
C
In his first line, he's saying there's a piece of code somewhere that only checks the first rule's host, and if the first rule's host is different, then it goes ahead and doesn't reject. So he's right. So it means we can't deep-dive into this, because in the real world nobody will use something like this. We can't go and imagine all the permutations and combinations of weirdness that people want to configure, and have the webhook check for every single whim.
D
So, you said something about when we have time and people... So I can take from that that this is an improvement that we actually need, but not a bug, right?
C
Oh yeah, and if you go in that direction, there are actually several TLS-related issues people have reported. For example, they want multiple client certificates, right, on mTLS, and things like that. So there are several valid and good improvements. And it's just one user, just one user or two, or anybody who seconds this kind of issue or says "okay, I am also facing this", and it's almost always a co-worker of that person or somebody from the same company.
C
So right now, for example, you can see that there is one more issue that's really... I thought it was a real big bug: after adding the release API code that Gita did, somebody is complaining that on Azure, established TCP connections die every 10 minutes.
A
I think we should close it. I think we should, like, have a v2 looking at improvements. So I would say this is a feature, because, yes, it is a separate ingress object with two rules, even if the rules do match. So it's an improvement to check all of the rules in an ingress against all of the existing ones, right? So let's look... I'll make that comment, I'll leave it at that, and I'll make sure that we put it on the backlog, right?
A
Then, yeah, the only last thing I had on the action and open items stuff was just closing the community survey. And no, I didn't respond back to Josh's request. We only had 62 responses.
A
So I'll grab some stats from DevStats, and we can close out the survey, get the results, and start looking at them. I mean, 62 is unfortunate, because, you know, it's 10,000 stars on the repo and lots of people use ingress, but yeah. So I'll go ahead and close that, I'll get the data, and I can share it out with everyone.
A
We've got 15 minutes left. Ricardo, do you want to talk a little bit more about the data plane?
D
Yeah, so I will try to be really direct on that. I took a lot of time actually trying to figure out which would be the best approach, instead of just going with one or the other one, and I have realized that the easier way to do it right now is... the final step of ingress is actually getting a whole structure and sending that structure into the nginx templating to do the template stuff. And the cool thing about that...
D
Two or three weeks ago... so, the status that we are at on the split right now: it is working. I have a control plane and a data plane running on my kind cluster here. Let me see if it's working or not... hold on... it is working, so we can see it as an example. Okay, I have it here.
D
We can see that whenever I... let me share my screen; you need to allow me, James. Whenever we generate a new configuration or something like that, this configuration is sent to the data plane, or to all of the data planes watching the control plane, and they will then start reconfiguring stuff, right? So, as you can see here, I have the ingress-nginx control plane, the data plane, and ingress-nginx, and let's take a look at the logs.
D
So if I come here and do a kubectl create ingress something... where is my ingress? Okay, I'm going to do this really fast: ingress, my project, here. Okay, and I'm just going to change this and create another thing here. We are going to see that it passes the admission, then it passes the event, and finally it triggers a new configuration here, right? I'm not sure why the data plane is not showing anything, but what's happening behind the scenes...
D
It's
that
my
the
data
plane
is
just
receiving
this
configuration
as
part
of
a
grpc
package
containing
the
whole
thing
plate,
and
it's
just
going
to
fake
that
template
structure
split
between
things
that
should
be
dynamic,
configuration
and
things
that
should
be
steady
configuration.
D
Then
it
will
write
the
nginx
configuration
file
and
it
will
just
call
the
lua
endpoint
to
do
the
configuration
of
the
of
the
endpoints
right.
Let
me
see
if
I
can
exact
what
sorry.
D
D
Okay, so if we come here to /etc/nginx, we should see that this rule that I have created is here. It's not; I'm not sure why this isn't working, maybe... So we have some problems, and one of them is that when the communication ends, it's not re-establishing. But this is something that...
D
...is being taken care of in the retry work and other things, right? So, okay, here we go, now it's working. So that's exactly it. If I just come here and create another one...
D
Here
it
is
so
it
was
pretty
fast
right.
So
I
have
this
I'm
here
my
login,
because
why
use
debugger
instead
of
printing
messages
with
the
version
right,
so
I
can
see
that
the
version
that
I
had
changed
it
just
applies
the
configuration
and
says:
okay,
I
have
a
back
end
reload
and
just
do
that's
the
reload
of
the
back
end
and
that's
all
and
and
returning
here
we
can
see
that
the
data
plane
posted
back,
that
it
had
a
reload
configuration
here
right.
D
So
it
just
said:
hey
I
had
a
new
configuration
and
because
of
that
I
have
just
triggered
a
reload,
so
this
is
what's
working
right
now
right,
so
we
have
the
control,
plane
and
data
plane.
The
control,
the
control
plane,
have
a
service
that
can
publish
the
events
back
when
it
receives
from
from
the
ingress
controller
right.
D
So
this
is
something
that
I
have
added
here
and
it
can
provide
a
watcher
that
every
time
the
configuration
changes
all
of
the
data
planes,
they
can
just
grab
the
new
configuration
and
just
do
whatever
it
needs
to
be
done
to
reconfigure
the
back
end.
D
Instead
of
getting
all
of
the
ingresses
and
doing
all
of
the
calculations,
so
my
my
expectation
next
it's
to
to
finish
this
and
make
it
sorry
folks
it's
to
make
this
properly
work
and
fix
the
end-to-end
tests.
I
mean
just
patiently
saying
that
he
finished
migrating.
The
end-to-end
test
to
the
news
to
the
new
model
without
the
http
expect.
D
So
I
want
to
merge
that
thing
and
then
rebase
this
and
start
doing
the
end-to-end
test
and
checking,
at
least
if
everything
that
we
have
in
end-to-end
test
or,
let's
say
75
percent
of
the
things
they
are
still
working
right.
So
if
I
can
create
an
ingress-
and
I
can
get
this
ingress
if
I
can
create
an
ingress
and
and
have
the
session
affinity
and
other
things,
I
know
that
the
logics
they
are
working
so
the
next
step.
D
It's
going
to
be
just
working
on
the
wire
on
the
grpc
layer,
which
is
what
is
working
doing
the
test
for
me
and
and
and
and
for
us
right
and-
and
I
want
to
check
with
him.
D
What's
the
progress
on
that
and
getting
the
retry
and
other
things
that
won't
get
me
into
this
situation
that
you've
seen
before
of
like
losing
the
connection
and
not
reestablishing
the
connection
so
getting
the
properly
the
performance
between
the
control
plane
and
the
data
plane,
working,
fine
and
even
testing
that,
seeing
if
I
just
keep
creating
and
deleting
a
lot
of
pods
and
creating
and
deleting
a
lot
of
configurations
how
much
of
the
network
bandwidth
I
would
take
just
for
all
of
those
configurations
and
and
so
on,
yeah.
D
So
it's
as
you
can
see,
it's
working,
not
the
final
way,
but
it's
it's
working
and
I
will
just
work
a
bit
more
during
the
weekend
on
this.
One
thing
that
will
help
would
help
me
a
lot
in
the
future
is
as
soon
as
we
get
the
survey
ready,
start
removing
stuff
right.
So
as
an
example,
zipkin
jagger,
open
tracing
call
it
the
way
you
want
it's
not
working
here,
because
it
renders
its
own
template.
D
On
the
other
hand,
one
thing
that
I
we
can
do
right
now
with
this
approach
is
that
we
can
now
split
the
configuration
of
each
of
the
virtual
hosts
right,
because
I
receive
all
of
these
things
as
a
structure
that
then
I
can
just
go
through
all
of
the
virtual
hosts
and
create
separate
configuration
for
each
virtual
host.
As
an
example,
one
thing
that
we've
been
discussing
in
the
past.
B
D
I don't know... I will be really honest: I tried to do the OpenTelemetry stuff one time, but I was using Debian, I guess, instead of Alpine. I failed in so many ways at compiling that thing that I said: yeah, whatever, someone is going to solve this for me. So I couldn't make OpenTelemetry work.
C
That scene has changed. That scene has changed, okay. OpenTelemetry compiles perfectly now; Asan, you know, did a lot of work on that. It compiles perfectly, it loads, and he's even got stats in a GUI, in Grafana. So right now the scene is: the individual components are working and tested.
A
I would focus... I would also just focus on getting the end-to-end finished, and then, we talked about dropping Jaeger and Zipkin and all of those, yeah.
D
Yeah, my ask was going to be something like that. So what I want is to make sure that OpenTelemetry can replace everything that we have on the telemetry side right now. Even Datadog: we compile Datadog in, and I have no idea why we do that. But I want to get rid of all of those things and have just OpenTelemetry, able to send to Datadog, to Jaeger, Zipkin, whatever we have as a backend, right?
D
So if you folks, like you and Asan, who are working on this, can put together a paper, a document, something like that, just showing: hey, we have this nginx, and it doesn't need to be ingress-nginx, okay, just nginx with the OpenTelemetry module, and we can send the tracings to Zipkin, to Datadog... I can try a license of Datadog if we need that, just to prove it. What I want is to have fewer moving parts in our code.
B
Maybe I can add something here. I already have this working for ingress-nginx.
D
And I guess this is just how you configure OpenTelemetry, right? So this is just how you add configurations to nginx.conf, because OpenTelemetry is just a module that will take the tracings and send them to whatever backend you support, right? So I've seen the documents; I've seen actually some of the compilations. I just want to make sure that we can drop Zipkin, Jaeger, OpenTracing, all of those things that we have right now, in, like, a PR.
D
Start doing that; you can start doing that, folks. Don't block on the control plane and data plane: if you can work on the already existing code to have OpenTelemetry instead of all of the other things, just start doing that, send me the review, and then I can just, I mean, fetch the branch later for the split control plane and data plane.
C
Correct. What happens is, within OpenTelemetry, like Asan was explaining, there's a component that has to be compiled for exporting, or sending, to different backends.
B
Yeah, so the module will use the OTLP gRPC protocol, and then whatever backend can support that can just receive the traces. Normally people deploy a collector, which does the conversion. So, for example, if Zipkin didn't support OTLP gRPC, you have the collector; you talk to the collector, and the collector will convert it and send it to Zipkin.
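The collector pattern described above (module speaks OTLP over gRPC, collector converts for the backend) corresponds to a minimal OpenTelemetry Collector configuration like this sketch; the Zipkin endpoint and listen address are placeholders:

```yaml
# Minimal OpenTelemetry Collector config: receive OTLP over gRPC,
# export to Zipkin. Endpoints below are placeholder values.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  zipkin:
    endpoint: http://zipkin.observability:9411/api/v2/spans
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
```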
D
Yeah, yeah, okay. Okay, for the next meeting, I want to discuss the announcements from NGINX, and whether we can rely on something... the recent announcements on the open source stuff.