From YouTube: [User Call] Kong Gateway 2.5 Release
Description
In this session, we get you up-to-speed on the Kong Gateway 2.5 release with a summary of the features and news, including:
- New Testing Framework
- #Hybrid Mode Enhancements
- And more! 🎉
Kong’s User Calls are a place to learn about technologies within the Kong #opensource ecosystem. This interactive forum will give you the chance to ask our engineers questions and get ramped up on information relevant to your #Kong journey.
#KongGateway #KongGateway25
B
Thanks, Daryn, for that; I really appreciate the introduction. So hey everyone, thanks again for taking the time out of your busy schedules to join us here on the monthly meetup call. It means a lot that you're here. As was said earlier, today is a rather special day for the gateway, as it coincides with the general availability of our 2.5 release of the gateway.
B
So with this new gateway, you know, fresh off the press, we want to take a short, maybe 30-minute presentation to survey for you what's in the release. In my presentation I have a lot of links to help augment your understanding of these features, and I'd just ask for a little bit of patience as the official release announcements come out and our docs are officially updated with all the great 2.5 features that are coming out.
B
That's pretty unfortunate, and we try to avoid those situations, but it's certainly not the end of the world, right? The next train in the station is scheduled to leave right on schedule in the following release.
B
So there's a high likelihood that that feature could land then. With each version release, we have around 12 weeks between minor versions of the gateway and around 18 months, sometimes more, for every major version. Those are our current targets for this sort of time-based deployment model, and really the advantage that we hope will help you out is that it makes these releases highly predictable, so this helps gateway users to better plan and build.
B
One,
maybe
downside
of
this
approach,
is
you
know
that
I'm
also
often
faced
from
a
pm's
perspective,
is
sort
of
answering
well
what's
coming
in
the
next
release?
When
is
my
feature
going
to
make
this
next
release
and
that
can
kind
of
be
inherently
uncertain
until
the
time
we
go
into
a
feature
freeze
or
the
code
for
a
particular
feature
is
written
and
tested,
and
we
would
have
a
better
idea
if
it
makes
a
specific
train
or
not.
B
So
you
know
this
is
one
of
the
trade-offs
that
we
take
when
we
move
to
time-based
releases
and
definitely
something
to
keep
in
mind
as
you
submit
ideas
to
our
community
forum
or
feature
requests
in
the
in
the
repository
as
we
prioritize.
Those
specific
feature
requests
so
for
the
2
6
release
with
our
advancements
in
a
lot
of
like
automating
our
release,
tasks
and
build
and
a
lot
of
our
testing
as
well.
B
So
from
us
I
know
the
the
gateway
team
is
working
really
hard
to
kind
of
keep
our
release
train
moving
and
to
deliver
a
successful,
2-5
julie,
ga
release
which
is
scheduled
for
this
afternoon.
B
So
I
hope
that
gives
you
a
little
bit
of
a
background
on
our
release.
Cadence
for
the
gateway
now
I'd
like
to
kind
of
survey
sort
of
what's
coming
in
our
2-5
release
of
the
kong
gateway,
starting
first
here
with
our
performance
testing
framework,
so
we're
finding
that
when
it
comes
to
building
healthy
production
systems
and
apis
understanding,
their
performance
is
super
critical
to
the
well-being
of
your
customers
or
partners
who
consume
those
services.
B
There's a large likelihood that the customer might, you know, just close the browser and look somewhere else for their business, and that's really a bad thing. So this need to understand performance, particularly of your API gateway, is why in this 2.5 release we're releasing a performance testing framework that provides an efficient way of carrying out performance benchmarks on your API gateway in, say, a given production environment.
B
It
can
help
you
to
more
accurately
estimate
your
hardware
requirements
and
ultimately
we
want
to
save
you
on
costs,
so
also,
if
you'd
like
to
get
an
understanding
of
how
a
particular
configuration
of
the
kong
gateway
with
say,
custom
plugins
operates
in
your
production
environment.
Using
this
testing
framework
could
be
an
accurate
way
to
measure
gateway,
latencies
and
just
kind
of
switching
over
now.
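As a rough illustration of the numbers such a framework reports, requests per second and latency percentiles, here is a minimal benchmarking sketch in plain Python. This is not Kong's framework itself, and the workload being timed is a stand-in for a request through the gateway:

```python
import time

def benchmark(fn, iterations=200):
    """Time repeated calls to `fn` and summarize latency in milliseconds,
    plus throughput, the same kind of numbers a gateway benchmark reports."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    total_s = max(sum(samples) / 1000.0, 1e-9)  # guard divide-by-zero
    return {
        "rps": iterations / total_s,                             # throughput
        "p50_ms": samples[len(samples) // 2],                    # median
        "p99_ms": samples[min(int(iterations * 0.99), iterations - 1)],
        "max_ms": samples[-1],
    }

# Stand-in workload for "send one request through the gateway".
stats = benchmark(lambda: sum(range(1000)))
print(f"p50={stats['p50_ms']:.4f}ms p99={stats['p99_ms']:.4f}ms")
```

Against a real gateway you would replace the lambda with an actual HTTP request and run enough iterations for the percentiles to stabilize.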
B
And switching over now: one of the cool things that we're doing as part of rolling out this testing framework is that we're integrating it into our existing open source repository on GitHub. So what I have open here is a GitHub Action.
B
We,
you
know
maintaining
the
kong
gateway
requires,
like
constant
trade-off
between
performance
and
delivering
really
rich
feature
sets,
and
with
this
performance
framework
in
place
on
our
github
repository,
our
maintainers
are
having
the
ability
to
plot
performance
trends
sort
of
over
time,
and
this
really
helps
us
to
ensure
that
you
know
the
high
performance
bar
of
our
expected
of
our
gateway
community
is
maintained
really
down
to
the
commit
level.
B
So
a
little
look
at
our
github
action.
To
give
you
some
background
on
that,
and
let
me
pop
over
to
an
output
of
this
performance
testing
framework
see
we
have
a
github
action
writing
a
comment
on
this
particular
pr
and
as
an
output.
It
gives
us
a
lot
of
nice
information
on
the
requests
per
second
and
latency,
so
I'll
pause
there
and
won't
change.
Are
there
any
other
details,
you'd
like
to
maybe
round
out
on
what
I
showed
here,
yeah.
C
Yeah, so one thing, apart from the latency and the requests per second: we also generate the so-called flame graph, which lets you measure the pain points of your program. If you have an application which you observe having bad performance, you can use this tool to pinpoint what portion is taking the most workload of your program, and we're generating this as well.
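For background, a flame graph is rendered from aggregated stack samples. The sketch below shows the core idea in generic Python, not Kong's profiling tooling: collapsing sampled call stacks into the "folded" one-line-per-stack format that flame-graph generators consume (the frame names are invented):

```python
from collections import Counter

def fold_stacks(samples):
    """Collapse raw stack samples (outermost-first frame lists) into the
    semicolon-separated 'folded' format used by flame-graph generators."""
    counts = Counter(";".join(frames) for frames in samples)
    return [f"{stack} {n}" for stack, n in sorted(counts.items())]

# Hypothetical profiler output: each sample is the call stack at one instant.
samples = [
    ["main", "router.exec", "plugin.run"],
    ["main", "router.exec", "plugin.run"],
    ["main", "balancer.pick"],
]
for line in fold_stacks(samples):
    print(line)
# Wide frames in the rendered graph correspond to lines with high counts.
```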
B
So I'll pause here for a minute. Any questions from the community group here around this performance testing framework? We'd love to start to understand a little bit more about your use cases, and whether there are any really interesting challenges you have where we could extend this performance testing framework to make things easier for you.
C
Sure, yeah, that's actually a good question. So we don't actually run those load tests inside the GitHub Actions; the GitHub Action is only a kind of handler or controller. We invoke tooling we use from HashiCorp, Terraform, which spins up third-party infrastructure, and all the load testing is done inside that third-party infrastructure. So whether GitHub has good or bad performance in their container runtimes doesn't matter to us.
C
Yeah, could you describe the question more? What do you mean when you say user environment: do you mean the operating system part, or the infrastructure setup side?
C
We can probably extend this to more scenarios. For example, you could have the controller run the setup in a container, maybe on a different OS, or with something specific like a load balancer. So for the infrastructure side, you can extend our Terraform module to add more of the infrastructure you're interested in, for example the load balancer I referenced before, and probably you could also spin up a K8s cluster and install Kong inside.
C
Actually, the Grafana metrics are not super connected to the performance test here right now, but we do share some of the metrics in both places, like the latencies and RPS. Those are available in the official Grafana dashboard included with the Prometheus plugin. If you're interested in seeing more metrics, especially for your production traffic, you're encouraged to install the Prometheus plugin and also import the official Grafana dashboard that comes with the plugin, and you'll be able to see live traffic metrics.
D
Thanks. We've got two more questions that seem quite similar: do you have any baseline recommendations after running this framework? And secondly, can you cover more on the performance test coverage, like which use cases can be covered as part of this framework? If I'm interpreting that correctly, it's things like: can we test throughput with consumers enabled, with various plugins, how deep can you go with this, and what recommendations have you seen after running this baseline test?
B
Wangchong, let me start for just a minute on the baseline recommendation side. We have a number of refactors coming with our 2.6 release, so I think, with our 2.6 timeline, we'd like to publish more of our baseline recommendations on what to expect. So definitely be on the lookout for some more information on our blog in that sense. Sorry, Wangchong, I didn't mean to cut you off.
C
Sure, yeah, I was about to answer this question. So actually, right now, the test cases are limited to our internal use, the parts we've actually been refactoring or developing. So right now we have a very simple test case to get everybody on board, which has only a single route, and one which has, sorry, 10 services that each have 10 routes, plus a simple plugin. This is just a simple sample test. And also, because we're refactoring the balancer part.
C
So we also have some test cases covering the balancer, and also what we call the plugin iterator. These are the two parts we care about most recently, so we added those test cases. And actually, we have documentation coming out with 2.5 in the docs.
C
You
can
you'll
be
able
to
see
how
to
write
your
own
test
cases
using
that
in
the
documentation.
It's
very,
very
simple:
to
use
it's
a
dual
file,
and-
and
you
can
you
can
just
copy
paste,
the
existing
ones
to
get
started,
and
if
you
are
already
a
chrome
developer,
you
might
already
familiar
with
the
syntax
or
the
layout.
So
yeah
so
definitely
take
a
look
at
the
documentation
and
see
if
we
can.
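For a sense of scale, the sample fixture described above, 10 services with 10 routes each, can be generated programmatically. Here is a sketch in Python using the shape of Kong's declarative config; the actual test cases are Lua files, and the upstream URL below is a placeholder:

```python
def perf_fixture(num_services=10, routes_per_service=10):
    """Build a declarative-config style fixture: N services, each with M
    routes, the same shape as the sample perf test described above.
    Field names follow Kong's declarative format; the URL is a stand-in."""
    return {
        "_format_version": "2.1",
        "services": [
            {
                "name": f"perf-service-{s}",
                "url": "http://upstream.example:8080",
                "routes": [
                    {"name": f"perf-route-{s}-{r}", "paths": [f"/s{s}/r{r}"]}
                    for r in range(routes_per_service)
                ],
            }
            for s in range(num_services)
        ],
    }

cfg = perf_fixture()
print(len(cfg["services"]), sum(len(s["routes"]) for s in cfg["services"]))
# → 10 100
```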
B
Yeah, and to that point, I'll just make you aware that as the release blog comes out, we've linked to all of this information that Wangchong is bringing up, so we'll be able to share our docs on this with you, to help make your onboarding a lot easier. So just be on the lookout for that.
D
In the chat, Paul mentioned taking it up in our GitHub Discussions on the topic. If you do try to write any of your own performance tests and you want to share those and submit a pull request, or if you're having issues, please hop on to the GitHub Discussions at github.com.
B
Thanks for that, Michael. Okay, so let's move over now to hybrid mode. Just to get you up to speed again on hybrid mode: really, our gateway plays two roles. One is a data plane, which proxies the traffic for your APIs and services, and the other is a control plane, which effectively synchronizes the gateway configurations across multiple data planes.
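For reference, that control-plane/data-plane split is configured through the `role` property in kong.conf. The sketch below is from memory of the documented hybrid-mode settings; the certificate paths and control-plane address are placeholders, so check the official hybrid-mode docs for your version:

```ini
# Control plane: owns the database and publishes config to data planes.
role = control_plane
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key

# Data plane: no database; proxies traffic using config pushed from the CP.
role = data_plane
database = off
cluster_control_plane = cp.example.com:8005
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key
```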
B
So
here
in
the
2-5
release,
we're
continuing
to
strengthen
our
hybrid
mode
deployment,
as
it's
really
becoming
one
of
our
most
popular
deployment
patterns
for
the
gateway
and
we're
just
really
excited
about
investing
in
this
feature
so
you'll
see
here,
I've
highlighted
a
number
of
our
2
5
editions.
B
We
are
making
it
easier
for
gateway
operators
to
maintain
the
versioning
between
the
kong
control
plane
and
the
data
plane,
and
we've
made
this
version
compatibility
between
the
cp
and
the
dp
a
little
bit
more
lenient
and
with
every
release
where
we're
working
on
that
version
compatibility.
B
What I wanted to highlight on the plugin side, in keeping with this strengthening-hybrid-mode theme, is the Prometheus plugin, as it's exposing more metrics on how healthy your data plane is: when it was last seen, what config it has, and whether it's compatible in its version. We're starting to explore more and more metrics to help you operate this hybrid mode a lot more easily.
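Those data-plane status metrics can be consumed from an ordinary Prometheus scrape. Here is a sketch of how you might flag unhealthy data planes from such a scrape; the metric and label names are illustrative assumptions, not the plugin's exact exposition format:

```python
# Sample exposition text of the kind a control plane could serve
# (metric/label names here are assumed for illustration).
SCRAPE = """\
kong_data_plane_last_seen{node_id="dp-1"} 1625760000
kong_data_plane_version_compatible{node_id="dp-1"} 1
kong_data_plane_last_seen{node_id="dp-2"} 1625750000
kong_data_plane_version_compatible{node_id="dp-2"} 0
"""

def unhealthy_data_planes(scrape, now, max_age_s=300):
    """Flag data planes that are stale or version-incompatible."""
    last_seen, compatible = {}, {}
    for line in scrape.splitlines():
        name, _, value = line.partition("} ")
        metric, _, label = name.partition("{")
        node = label.split('"')[1]
        if metric == "kong_data_plane_last_seen":
            last_seen[node] = float(value)
        elif metric == "kong_data_plane_version_compatible":
            compatible[node] = value.strip() == "1"
    return sorted(
        node for node in last_seen
        if now - last_seen[node] > max_age_s or not compatible.get(node, False)
    )

# dp-2 is both stale and version-incompatible at this timestamp.
print(unhealthy_data_planes(SCRAPE, now=1625760100))
```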
B
Plugins
themselves
got
a
lot
of
updates
and
fixes,
and
finally,
from
a
community
perspective,
I
wanted
to
make
you
aware
that,
on
our
repository,
we've
opened
up
github
discussions
as
our
kind
of
main
community
forum
we're
currently
piloting
how
this
works
now.
B
So,
if
you
need
help,
this
might
be
a
really
great
first
place
to
start
to
receive
answers
for
from
individual
community
members.
Maybe
power
users,
and
also
the
core
engineering
team,
is
in
github
every
day.
So
one
advantage
that
I
really
like
about
this
community
forum
is
it's
how
integrated
it
is
with
our
main
repository
in
it
with
github.
So
we
spend
a
lot
of
time
here,
we're
monitoring
the
threads
and
from
a
pm
perspective.
B
B
B
To
our
enterprise
offerings
to
kubernetes
ingress
controller,
so
this
is
this
discussion,
we're
kind
of
working
on
how
to
make
it
kind
of
cross
product,
but
you
may
want
to
tailor
your
discussions
to
for
the
specific
open
source
gateway
itself
in
this
community
forum,
but
we're
here,
of
course,
to
help
in
triage
and
get
you
to
the
right
team
that
will
help
answer
your
your
questions.
So
look
out
here
for
a
post
coming
this
afternoon.
We're
really
excited.
B
I
know
the
team's
working
really
hard
to
package
up
and
put
the
final
touches
on
a
2-5
release
of
our
open
source
gateway.
So
we'll
be
posting
a
release.
Announcement
on
the
discussion
forum
and
very
shortly
here,
we're
going
to
hit
publish
on
our
release
blog,
which
outlines
in
much
greater
detail
than
we
did
on
the
presentation
here.
Every
feature
edition
with
some
clear
docs
on
how
to
read
up
on
that
feature.
B
Not that I'm aware of at the moment. Maybe Michael and I can team up on this; it might be an interesting effort to push forward. But if you like that kind of asynchronous discussion, maybe a Slack community would really help here. I think we're definitely starting first with the community forum approach before we move to real-time communication.
D
Yeah,
that's
a
a
great
point
paul.
If
you
want
to
drop
that
in
the
ideas
category
on
github
discussions,
we
can
see
what
kind
of
interest
there
is
in
the
community
and
if
we
hit
that
critical
mass,
let's
make
it
happen
either
any
other
questions
you
can
raise
your
hand
and
I'll
invite
you
to
a
mute
or
you
can
drop
it
into
the
chat.
B
Yeah,
this
definitely
is
a
topic
that
deserves
its
own
sort
of
session
and
is
dependent
on
your
needs
and
your
deployment
architecture
with
the
gateway
so
for
more
best
practices
there.
I
might
want
to
encourage
us
to
maybe
connect
offline
outside
of
this
meeting
to
get
a
little
bit
more
information
about
your
specific
deployment
landscape.
B
So
we
can
help
recommend
how
to
upgrade
to
two
five,
a
little
bit
better.
D
I'm
going
to
send
light
sound
like
a
vlog
on
that
code
here,
but
that
looks
like
a
fantastic
question
for
the
github
discussions,
so
we
can
follow
up
after
this
call
max
like
paul
said,
it
really
depends
on
what
you're
trying
to
achieve,
and
also
your
deployment
configuration
whether
you're
using
the
admin
api,
whether
you're
using
declarative
config
like
there's
a
lot
of
factors
that
we
need
to
know
to
make
her
the
best
recommendation
for
that.
D
Siddharth,
let's
go
yep
feel
free
to
mute
yeah
thanks.
A
So
I
have
a
quick
question
regarding
one
of
the
issues
kind
of
that
I
faced
so
right
now,
I'm
using
a
kubernetes,
I
mean
deployed
kong
or
kubernetes
and
then
have
I
think,
2.4
car
right
now
and
then
using
a
db
mode.
So
sometimes
I
see
a
weird
situation
when,
if
I
get
an
error
right
like,
for
example,
the
authentication
failure
or
whatever
it
is
that
time,
I
see
the
they
don't
get
a
response
from
kong.
A
It
kind
of
just
rotates,
and
on
I
mean
I
mean
it,
waits
for
like
a
timeout
like,
for
example,
right
right
through
postman
waits
till
the
postman
timeout,
and
then
it
kind
of
doesn't
give
me
a
reply
like
even
doesn't
give
me
a
401
or
whatever
error.
That
just
means
is.
Do
you
think
that
is
something
that
is
that
you're.
C
I
don't
recall
any
like
issues
regarding
this
timeout
for
the
authentication
plugin.
So
maybe
you
can
open
a
github
issue
and
maybe
share
with
us
your
an
analog.
You
see
and
maybe
the
configuration,
many
more
entities
that
includes
your
the
plugin
and
also
some
service
and
files.
So
it
can
make
us
reproduce
our
site.
A
Sure
sure
yeah
I'll
do
that.
Thank
you.
So
I
have
a
question
so
this
regular
year,
so
we
are
running
kang
on
keyboard
notice
and
we
are
trying
this
multilingual
deployment
and
what,
from
both
data
centers,
we
are
pointing
the
same
post
database.
A
So
what's
happening
is
even
though
we
have
many
app
running
on
both
the
clusters
and
in
two
different
data
centers
at
given
point
of
time,
we
are
able
to
only
call
one,
only
one
administrator,
which
is
the
the
recent
you
know,
cluster
that
went
into
database,
the
other
cluster,
which
was
deployed
before
the
before
before
this
cluster
right.
C
Yeah, so which kind of database are you using, is it Postgres or Cassandra? Postgres. So you mentioned that there are two Kong clusters; how are they organized? Are they both writing to the same database, or is there some kind of replication happening outside?
A
The
same
database,
so
we
are
writing
into
the
same
master
database
from
both
the
clusters
in
two
different
data.
Centers.
C
Yes,
so
the
admin
api
most
likely
will
create
a
new
text
to
protect
from
like
a
like
half
a
modification,
so
it
might
be
normal
to
steal
a
timeout
but
yeah.
Maybe
maybe,
let's
move
this
discussion
to
github
as
well,
because
I
think
we
will
need
to
ask
more
information
about
detailed
information
from
you,
especially
how
to
set
up
this.
How
are
you
setting
up
this
to
cluster
okay
yeah?
It
might
also
benefit
for
other
github
users
as
well.
A
And-
and
there
is
one
more
issue
so
when
we
try
to
connect
to
our
datums
for
the
first
time
right,
so
we
are
getting
the
bootstrap
whether
when
the
parts
come
up
the
con
parts.
But
then,
even
though
we
see
that
error
in
the
logs,
but
the
parts
are
coming
up,
fine,
we
have
the
migrations
for
pre
upgrade
post
operator
set
to
true.
Only
so
is
there
anything
that
we
have
to.
You
know
make
change
to
make
change
to
those
config
values,
to
not
see
the
sellers.
A
We have both pre-upgrade and post-upgrade set to true in the config file that we have. So even then we are seeing this bootstrap error, but the pods are coming up fine.
C
Yeah
so
usually,
when
you
saw
a
push
up
arrow,
even
if
coin
started,
you
might
not
consider
it
being
healthy
because
it
might
missing
some
of
the
schema.
C
So
usually
when
you,
when
you
saw
a
push
up
arrow,
the
migration
script
will
most
likely
and
exit
with
a
non-error
non-zero
ac
code.
So
you
might
want
to
check
your
script
and
to
error
out
on
the
migration
step
and
if
it's
sale,
then
just
try
not
to
bring
out
the
part
in
later,
because
it
might
start
start
calling
the
noun
like
a
consistent
consistently.
So
it
might
break
unexpectedly.
C
Okay, can you say the parameter you're referencing again? You were mentioning that you set some parameter to true, right?
A
That's
the
three
upgrade
and
post
upgrade
values
in
the
fields
in
the
migrations
segment
in
the
values.ml
right
in
this
in
the
cr
file.
We
have
these
migrations
values
called
pre
upgrade
and
post
update,
and
they
can
toggle
true
and
false
right.
D
This
sounds
like
we
might
need
to
show
some
configuration
files
that
will
be
easier
in
text
rather
than
speaking
through
them.
So
I'd
encourage
you
raghu
to
open
a
github
issue
with
the
ever
message
that
you're
seeing
in
the
logs
and
your
configuration
file,
and
we
can
take
a
look
and
see
what's
going
on
yeah.
C
Remember
to
the
into
our
sample
like
kubernetes
deployment
files,
and
I
mean
I
need
to
take
a
deeper
look
at
that
so
yeah.
So
please,
please
do
share
your
confirmation,
crd
to
okay,
github,
okay,
yeah,
later
yeah.
C
Raghu, could you describe the question in more detail, like what you're trying to accomplish? Are you trying to use Kong as a load balancer, or are you trying to put a load balancer in front of Kong, to balance Kong itself?
C
Sure
put
open
answer
before
okay,
I
don't
think
for
this.
We
have
like
a
bad
practice,
so
you
can
for
this
specific
question.
You
can
treat
kong
as
a
normal
like
a
service
like
the
application.
So
one
thing
you
want
to
note
is
that
con
will
rely
on
some
kind
of
the
remote
address
or
the
headers
so
make
sure
to
always
pass
along
all
the
other
remote
information
like
remote
address
ports
and
other
other
headers,
if
you're
using
a
load,
seven
layer,
seven
load,
balancer
yeah.
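The "pass along the remote information" advice boils down to maintaining the standard forwarding headers hop by hop. Here is a minimal sketch in generic Python, not Kong code; the header names are the de facto conventions, and the values are placeholders:

```python
def add_forwarding_headers(headers, client_ip, client_port, scheme="https"):
    """Append this hop's client info to the de facto forwarding headers
    before proxying a request onward, as an L7 load balancer should."""
    fwd = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{fwd}, {client_ip}" if fwd else client_ip
    headers.setdefault("X-Forwarded-Proto", scheme)   # original scheme
    headers.setdefault("X-Forwarded-Port", str(client_port))
    headers["X-Real-IP"] = client_ip                  # some proxies set this too
    return headers

h = add_forwarding_headers({"X-Forwarded-For": "203.0.113.7"},
                           client_ip="198.51.100.2", client_port=443)
print(h["X-Forwarded-For"])  # → 203.0.113.7, 198.51.100.2
```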
D
The questions keep coming today; this is awesome. A question from Isaac: should we use, by default, for all the Kong services created, Kong upstream objects and balancers, or should we use Kong services the old way? I'm not sure I follow that one.
C
Yeah
yeah
so
yeah,
I'm
I'm
not
sure.
If
I
understand
correctly
so
my
united
me
goes
through
my
understanding,
yeah.
E
I
can
explain
you
so,
but
basically
you
have
in
order
to
balance
upstream
services.
You
have
two
ways,
so
you
have
the
in
the
when
you
define
create
a
service
object
in
con
okay
in
the
host
section,
you
can
use
what
you
call
up
strings
so
that
the
load
balancing
internal
load,
balancing
mechanism-
okay,
where
you
define
the
targets
and
so
on
or
you
you
you
can
avoid
to
use
that
and
and
don't
use,
congratulations,
objects.
Okay,
so
don't
use
targets
and
you're
exactly
the
con
service
in
the
host
okay.
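As a rough illustration of what the upstream/targets mechanism does internally, here is a toy weighted selection in Python. It is not Kong's actual balancer, and the target addresses and weights are invented; it just shows the simplest form of weighted round-robin over targets:

```python
import itertools

def weighted_targets(targets):
    """Expand {host: weight} targets into a cyclic pick order, the simplest
    form of the weighted round-robin an upstream's balancer performs."""
    ring = [host for host, weight in sorted(targets.items())
            for _ in range(weight)]
    return itertools.cycle(ring)

# Hypothetical upstream with two targets, weighted 2:1.
picker = weighted_targets({"10.0.0.1:8080": 2, "10.0.0.2:8080": 1})
# Two picks of the first target for every pick of the second.
print([next(picker) for _ in range(6)])
```

A production balancer interleaves picks more smoothly and layers health checks on top, which is exactly the extra machinery the upstream object buys you.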
E
What
is
the
best
way
to
to
do
it?
So
we
should
migrate
to
the
upstream
objects
or
we
should
keep
it
keep
it
in
the
like
the
old-fashioned
way.
C
So
that
that
depends
on
really
depends
on
your
use
case
so
for
for
having
the
like
the
upstreams,
like
you
mentioned,
balances
and
the
targets,
I
basically
have
all
the
benefits
of
health
checks
and
some
additional
features
like
option
tls
some
kind
of
stuff.
C
So,
if
you're
having
only
the
old
way
of
service,
you
do
configure
it
more
conveniently
because
you
don't
have
to
configure
additional
objects
or
entities,
but
you
do
not
have
these
kind
of
like
active
hashtags
or
the
additional
features
that
tied
to
the
balances,
sorry
to
the
app
strings,
only
so
that,
just
depending
on
how
you
want
to
how
you
want
to
use
glasgow.
How
is
like
use
case
here?
So,
if
you
are
not
using
any
of
the
features,
these
two
are
just
the
same
and
you
can
consider
internally.
E
Yeah, hey guys, just to clarify: using the canary deployment, or the canary plugin that you guys have there as a feature, our endpoint paths use two different naming conventions.
E
So I was just wondering if you guys have any recommended approach on that, or if it's, you know, not supported. Just trying to get my head around that. We've been working on trying to figure this one out, but just came to a blocker there.
D
Yeah,
I
could
say
this
one
that
honestly
sounds
like
a
bug
to
me
like
we,
if
you're
able
to
configure
separate
upstream
uris
for
the
the
original
deployment
and
the
canary
and
we're
not
respecting
it
for
the
canary,
I
think
we
need
to
dig
in
and
find
out
why.
E
Is
that
something
that
a
bug
ticket
should
be
open
from
you
guys
or
like?
How
does
that
work?.
D
E
D
You
thanks
paul
another
question:
where
can
we
put
custom
logic
before
hitting
the
upstream
and
I'm
happy
to
say
this
one
as
well?
This
is
where
kong
plugins
come
in
really
useful.
D
So
during
the
execution
phase
of
a
request-
and
you
can
manipulate
the
request
using
kong
plugins
before
it
sends
to
the
upstream
and
also
once
the
response
has
been
received,
you
can
buy
those
plugins
either
in
lua.
If
you
want
the
best
performance
or
we
also
offer
plugin
servers,
if
you
prefer
to
write
golang
or
javascript
or
typescript,
we
have
extensive
documentation
available
for
each
of
them
blog
posts
on
jsongo
that
will
drop
into
the
chat
for
you.
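To illustrate the idea in the abstract, a plugin is essentially a pair of hooks: one that runs before the request reaches the upstream and one after the response returns. The sketch below is generic Python, not Kong's plugin SDK (real Kong plugins use the Lua PDK or a plugin-server runtime), and the header names and values are made up:

```python
import json

def access_hook(request):
    """Runs before proxying: inject a header and reject bad payloads,
    the kind of custom logic the question asks about."""
    request["headers"]["X-Correlation-Id"] = "req-123"  # placeholder value
    try:
        json.loads(request["body"])
    except ValueError:
        return {"status": 400, "body": "invalid JSON"}  # short-circuit
    return None  # continue to the upstream

def response_hook(response):
    """Runs after the upstream answers: tag the response."""
    response["headers"]["X-Handled-By"] = "custom-plugin"
    return response

req = {"headers": {}, "body": '{"ok": true}'}
assert access_hook(req) is None          # valid JSON passes through
print(req["headers"]["X-Correlation-Id"])
```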
B
We're working on a number of hybrid 2.0 features that will help with this. So you'll see, coming from us shortly, we'll go one by one through these plugins and certify them for use in our hybrid mode, and this is one of the plugins that we're going to approach first. This is a really common ask.
B
So
it's
definitely
on
our
roadmap.
I
don't
have
a
timeline
quite
yet,
but
we'll
be
publishing
some
more
information
on
which
plugins
that
we
certify
first
for
for
use
in
hybrid
mode.
I
think
this
is
probably
the
first
one
to
go.
D
Thanks
paul
next
question
is
from
isaac.
Do
we
have
any
plans
to
enable
nginx
open
tracing
by
default
in
kong.
C
Sorry; we do have our own tracing implementation, and also a community plugin that talks to the SkyWalking server, which you can also use as a tracing agent in Kong. But as far as I know, and correct me if I'm wrong, I don't think we have a plan to add this; I think it's a C module, which is hard to bundle into Kong. But you can always compile your own binary installation.
C
We
can
open
source
the
building
tools
that
you
can
build
from
from
source,
of
course,
to
a
final
binary
package.
D
Could
you
head
over
to
github
and
raise
that
as
an
issue
on
the
the
kick
reaper
and
we'll
see
if
we
can
fit
that
into
the
road
map
advice
specific
to
the
ingress
controller,
not
the
gateway
in
general,
all
right
chalky
for
the
custom
logic,
you
need
to
extract
values
from
json.
The
plugins
can
definitely
do
that,
whether
it's
lure
js
or
go-
and
you
can
parse
json,
transform
it
and
do
anything
you
need
before
passing
it
off
to
the
upstream
magu.
D
Can
we
point
to
multiple
postgres
hosts
through
the
pghost
property
in
the
config
file,
and
can
it
understand
the
current
master
impulse
question
forward
the
connection
on?
Do
they
need
to
change
the
hosting
config
each
time
the
master
changes
on
the
postgres
side?
This
is
running
on
kubernetes
wang
chong.
I
guess.
C
Sure
I
yeah
so
the
current
master.
I
don't
know,
there's
like
official
way
that
you
can
like
in
the
passenger
line,
because
then
you
are
like
way
that
you
can
set
up
a
postcard
cluster
and
assign
a
specific
one
as
a
master.
So
maybe
you
are
using
your
like
some
kind
of
implementation.
That
does
this.
C
If
you
do,
please
like
open,
github
issue
that
let
us
know
if
or
if
we're
just
talking
about
like
dns
based
rotation-
and
I
think
the
con
already
handles
that
and
as
long
as
you're
dn
server
in
the
kubernetes
cluster
configure,
the
ttl,
properly
and
khan
will
be
able
to
rotate
the
entry
for
the
master.
Once
the
the
new
new
postgres
database
had
been
spinning
up.
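The DNS-based rotation mentioned here relies on the client re-resolving the name once the record's TTL expires. Here is a minimal sketch of that behavior in generic Python with a made-up resolver function; real clients take the TTL from the DNS answer itself rather than a fixed constant:

```python
import time

class TtlResolver:
    """Cache a resolved address and re-resolve only after the TTL expires,
    the behavior that lets a client follow a failed-over Postgres master."""
    def __init__(self, resolve, ttl_s, clock=time.monotonic):
        self._resolve = resolve   # name -> address (stand-in for a DNS query)
        self._ttl = ttl_s
        self._clock = clock
        self._cache = {}          # name -> (address, expires_at)

    def lookup(self, name):
        entry = self._cache.get(name)
        now = self._clock()
        if entry is None or now >= entry[1]:
            entry = (self._resolve(name), now + self._ttl)
            self._cache[name] = entry
        return entry[0]

# Simulated failover: the "DNS record" changes between resolutions.
answers = iter(["10.0.0.5", "10.0.0.9"])
now = [0.0]
r = TtlResolver(lambda name: next(answers), ttl_s=30, clock=lambda: now[0])
print(r.lookup("pg-master"))   # first resolution
now[0] = 10.0
print(r.lookup("pg-master"))   # TTL not expired: cached address
now[0] = 40.0
print(r.lookup("pg-master"))   # TTL expired: re-resolves to the new master
```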
D
Cool
thank
you
now,
chucky
again
asking
about
plugins.
D
I
would
recommend
that
you,
you
head
over
to
the
github
discussions
forum
and
outline
what
you're
trying
to
achieve
and
we'll
be
able
to
help
you
there,
rather
than
going
back
and
forth
on
the
call
and
up
a
new
person.
What
is
the
recommended
way
for
configuring
consumers
plugins
for
ingress
controller?
D
Should
you
use
the
admin
api
or
should
you
use?
Kubernetes
manifests,
and
there
are
a
few
features
that
are
not
supported
through
the
manifests
and
do
we
still
have
harry
on
the
call.
I
saw
him
earlier.
No
he's
dropped
off.
Can
anyone
take
this
one.
C
Yeah,
I
can
probably
answer
this
one
as
well,
so,
if
you're
using
the
using
the
code
ingress
controller,
always
do
that
through
the
invest,
fair
or
you'll,
see
the
crd,
otherwise
any
any
any
changes
to
do
should
any
api
might
just
got
deleted
after
after
the
sync.
It
might
in
that
case
always
be
the
case,
but
it
is
potentially
happening.
D
Yeah, and echoing what Wangchong said: CRDs are the way forward if you're using the Ingress Controller. But we love hearing from you all when you're hitting these pain points, when you've got ideas, so please do head over to GitHub and raise issues or post in the Discussions forums. That way, we know what to prioritize next.
C
Any pain points? I assume you mean pain points, right? Yeah. So the main difference is that in Kubernetes everything is running in a container, so make sure you have the correct privileges and file permissions, make sure you run the migration inside the Kubernetes container before you start, and also note that the remote address might get changed if you're not running Kong itself as the ingress, so make sure to forward all the headers.
B
You know, Isaac, as you journey into this strategy, this might be a really great way we could partner together, to maybe help spread the word to other community members. So maybe we can partner on making sure you're successful, and maybe we can build a little technical blog on what it was like to make that move.
D
I think we're out of questions, and we're almost at time as well. I've got time for one more if you have it.
D
Going once, going twice. All right, let's call it there. Thank you very much, everyone, for taking the time to join us today. I'd like to remind you that these user calls happen on the second Tuesday of every month.
D
Our
next
call
is
the
10th
of
august,
and
I
hope
to
see
you
all
there
in
the
meantime,
keep
an
eye
out
for
the
2.5
release
going
out
today
read
the
blog
and
everyone
that
asked
a
question
that
we
couldn't
answer
on.
This
call
please
head
over
to
github
post
it
in
discussions.
There's
an
issue
we'll
make
sure
to
help
you
there
enjoy
the
rest
of
your.