From YouTube: [Online Meetup] Kong 1.2 and 1.3 Upcoming Releases
Description
Kong Principal Engineer Thibault Charbonnier presented and demoed upcoming features to be released in Kong 1.2 and 1.3. Thibault covered performance improvements, introduced newly open-sourced plugins, and announced gRPC proxying.
Join our next Online Meetup: https://konghq.com/online-meetups/
They call it the Uber for trucks, for heavy transportation, and the premise is very aggressive. But, I don't know, it's a very difficult market to get into, because the truckers don't have a lot of education and are not at ease using a mobile app, and they already have a closed market. It's difficult to get into, and also Brazil is very, very big and difficult to cover, whether for the truckers or for the country compared to other countries.
Our platform is an edge platform written in another language, and then I came to know about Kong. From the edge point of view, as well as for my internal project, I would like to replace that engine with Kong. But mostly there are two features I'm looking into: one is certificate-based authentication, for use cases like devices having their own certificates, along with the reverse proxy and TLS termination, which is the other feature I may be most interested in. Yes, it is, yep.
Great, and then I also shared our agenda link in the chat here. So if you'd like to put yourself down as an attendee, and if you would like a Gmail and Google Calendar invite to the community call each month, please put your email address down; we'll make sure that you get that. And then, with that, let's hand it on over to Kevin, who will talk a little bit about our community initiatives coming up. Yeah.
Another community uses Discord, and they really like it. So I'd love to hear from the community members, you know, what kind of platforms they prefer and like to use, so we know how we can best support you guys moving forward. This is something that, you know, I'll start talking to a lot of our community members about: where would they like to interact with the internal team and with other community folks. So, yeah.
Okay, so, yes, I was thanking Hannah for switching it over to me, and then I was saying sorry about that. So we're going to look at the upcoming Kong open source releases for the next few months. Recently we put out the Kong 1.1 release with declarative configuration and DB-less nodes, so we're going to look now at Kong 1.2 and Kong 1.3, and at the new improvements and features that the team has been building over the last couple of months or so.
Some of this work started even before our 1.1 DB-less work, and some of it will improve 1.1 DB-less as well. Okay, so let's first have a look at Kong 1.2. This release, which is slated for quite soon actually (I will give estimates in a little bit), is a consolidation release on top of Kong 1.1, with, of course, a few additional features, because everybody loves having a few gems in those releases. So with 1.2 we're going to consolidate the declarative configuration. We have helpful bug reports from the community and from some early adopters who have been very active in using the declarative configuration and reporting their usage and their issues. We are also migrating more and more of our test suite on top of DB-less, so as we do this, we uncover any new issues or optimizations we can do.
So, talking about optimizations in 1.2: beyond improving the declarative configuration, we're looking at somewhat important performance improvements; we're looking at open sourcing new enterprise plugins, things that have been enterprise-only for the last year and a half or so; we're looking at wildcard support for SNI matching, which will be part of our demo later; and hopefully we're going to merge recent contributions that improve connection management with Postgres, so that we can limit the number of concurrent connections open to the Postgres server. And we added some new memory statistics in the Admin API that will give you insight into Kong's allocated memory, whether from the LuaJIT VMs or from the shared memory dictionaries.
Okay. So let's talk about the performance improvements that we have in the pipeline for 1.2. I want to distinguish two types of performance improvements we've been doing. First there is what I call the baseline improvements, which just improve the performance of any proxied request, so technically lower latency and an increased number of TPS.
As you can see on the right here, I have some links to each of the PRs that relate to those changes, because those have already been merged. Unfortunately, I don't have numbers for you today. That is something we will put out before the release, but we have yet to run extensive tests that encompass all of those changes merged together in one bundle. So, for the baseline improvements, we have a new plugin run loop with new heuristics.
What we mean by this is an optimized run loop in which we do a lot fewer CPU operations per request and a lot fewer database queries. Those queries, as we know, are often protected by the cache: oftentimes the plugin run loop is going to ask the database for the plugins that are configured, and oftentimes it will actually just hit the cache, so no query to the database will be made. But with this improvement, we actually optimize the run loop to know ahead of time which plugins are configured.
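As a toy model of that heuristic (an illustration only, not Kong's actual Lua code, and the plugin names here are hypothetical): if the node knows ahead of time which plugins are configured at all, the run loop can skip the cache and database lookups for every plugin it knows is absent.

```python
# Set of plugins known ahead of time to be configured on this node
# (hypothetical names, for illustration).
configured = {"rate-limiting"}

lookups = 0

def load_plugin_config(name):
    """Stand-in for a cache/database lookup."""
    global lookups
    lookups += 1
    return {"name": name}

def run_loop(available_plugins):
    """Naive run loop: one lookup per available plugin, per request."""
    return [load_plugin_config(p) for p in available_plugins]

def run_loop_aot(available_plugins):
    """Optimized run loop: only look up plugins known to be configured."""
    return [load_plugin_config(p) for p in available_plugins
            if p in configured]

available = ["rate-limiting", "key-auth", "cors", "proxy-cache"]
run_loop(available)        # 4 lookups
naive_cost = lookups
run_loop_aot(available)    # only 1 more lookup
```

The saving compounds because the check runs on every proxied request.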
All the details are in the PRs for those of you who are curious, and if you want to get to know how the Kong internals are built and how to contribute to the core, the pull requests about the plugin run loop are very important, and we'd love to have more people knowledgeable about the internals of the plugin run loop and contributing to it. We also improved the general JIT-ability of the plugin run loop. As some of you may know, Kong is built on top of LuaJIT and not plain Lua.
LuaJIT, as the name says, is a just-in-time compiled version of the standard Lua interpreter, and we made updates to some of our hot code paths, as we like to refer to them, to make sure that the code runs faster and takes full advantage of the just-in-time abilities of LuaJIT. So we spend less time in interpreted mode and more time in JIT-compiled mode. And then we have the p99 performance improvements. So what do we mean by this?
As I was referring to earlier, whenever a request wants to retrieve a configuration value from the database, the request will first go through the cache, and the cache may return a hit. In that case we don't have to hit the database, open a connection, and actually send the query over to the database.
But sometimes the value that we're looking for is not in the cache, and in those cases Kong will have to go all the way to the database: open a connection, maybe even perform a TLS handshake with the database, run a query or a sequence of queries, eventually get pages of rows back, and build some state, right. So what we're looking at in Kong 1.2 is improving this, because by p99 what we mean is that eventually those requests create the p99 latency tail, right.
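To make the "p99 tail" concrete: even when the typical request is fast, a couple of slow cache-miss requests out of a hundred are enough to set the 99th percentile. A toy illustration (nearest-rank percentile, simplified, and the latency numbers are made up):

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value at the p-th percent rank."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# 98 cache hits at ~1 ms, plus 2 cache misses paying database
# round-trips at 120 ms each.
latencies_ms = [1] * 98 + [120, 120]

median = percentile(latencies_ms, 50)   # the typical request: 1 ms
tail = percentile(latencies_ms, 99)     # the p99 tail: 120 ms
```

This is why moving the database work off the request path specifically targets p99 rather than the median.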
So, thanks to the feedback that was given to us by Jeremy and to some of our engineers' work, we've been putting a lot of effort into improving that latency tail and making sure that any such request to the database is actually spun into an asynchronous job, so it doesn't affect the request's lifecycle.
So eventually the request will be processed, maybe with a slightly out-of-date cache, but the cache will be refreshed in the background, and the next request will hit the fresh cache value. So we're looking at asynchronously rebuilding the router, which so far has been the most expensive p99 latency impact, and we're looking at warming up database entities and DNS queries. We have this new property called db_cache_warmup_entities in the Kong configuration file, and it gives you the possibility, the choice, of warming up the database cache ahead of time.
So if you want to make sure that all of the plugins are already in memory, so that the plugin run loop doesn't have to hit the database in the first place, you can do so with this property. In fact, plugins are warmed up by default in the 1.2 version of Kong, because db_cache_warmup_entities has two values by default: services and plugins. For services, as some of you may know, this behavior is actually somewhat similar to nginx's handling of DNS. If you've used nginx before, you have a resolver property, and any hostname that needs to be resolved at runtime will be resolved with the internal resolver of nginx, which you can specify yourself: you specify the name server, and nginx will query that name server. But if you hardcode a DNS name in the nginx configuration, it will actually be resolved via the system resolver when nginx starts. So that's an optimization that nginx does, and warming up services is somewhat of a similar optimization.
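As a sketch, the warm-up property described above would sit in kong.conf like this (the values shown are the 1.2 defaults mentioned in the talk; check the configuration reference for the exact syntax):

```
# kong.conf
# Entities to pre-load into the in-memory cache when the node starts.
# Warming up "services" also resolves their DNS names ahead of time.
db_cache_warmup_entities = services, plugins
```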
Great. So the next improvement, or the next big item on the release list for 1.2, relates to the new open source plugins that are coming out of Kong Enterprise. Our goal, or strategy, with Kong has always been to support the open source development thanks to the enterprise platform, with the revenue and the growth that we experience from the enterprise platform being funneled back into the open source project and into the community. It's something that we hold really close to heart. We really want to pay back the open source community for all the wonderful technologies that we've been able to use, and we really have a strong belief in the power and strength of the community and of our open source approach. So, as time moves forward, we always had this plan of taking enterprise components out of the enterprise platform, open sourcing them, and gifting them to the community, and in 1.2, I think, this is the first time that we're actually acting on that premise.
So we're open sourcing the proxy-cache plugin and the request-transformer-advanced plugin, and I'll gladly take questions from the audience about them. So, the proxy-cache plugin: some of you may have seen it before in our enterprise documentation or on the Kong Hub.
It is currently only part of the enterprise platform, but this plugin allows you to cache HTTP requests, sorry, to cache HTTP responses, based on the Cache-Control headers; it implements the Cache-Control specifications. The about-to-be-released open source version allows you to cache those responses inside of the Kong node's memory, while the Enterprise version will still have Redis support on top. But we think that the open source version, being able to leverage the lua_shared_dict shared memory zones and cache values inside of those, will be extremely powerful for open source users. And in fact, the memory statistics that ship along with 1.2 will further help those open source users in tuning their proxy-cache configuration; we'll talk about that in a second. So, yeah, it's a very powerful plugin that implements many of the Cache-Control headers and specifications.
It provides some observability via response headers that it injects, and its configuration can be overridden by clients with max-age and max-stale request headers. It's very powerful; we've had a number of enterprise customers using this plugin for over a year now, and very satisfied customers so far. I've heard a lot of great feedback about this plugin, so we're really excited to ship it to the community.
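As a sketch of enabling it, here is what a declarative configuration entry might look like (the service name and URL are hypothetical, and the config fields are assumptions; check the plugin's documentation for the exact schema):

```yaml
# kong.yml
_format_version: "1.1"
services:
- name: example-service
  url: http://httpbin.org
  plugins:
  - name: proxy-cache
    config:
      strategy: memory           # open source backend: in-node shared memory
      cache_ttl: 300             # seconds to keep a cached response
      response_code: [200]
      content_type: ["application/json"]
```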
Okay, the request-transformer-advanced plugin. Request-transformer-advanced is an improved version of the request-transformer plugin, and its main advantage is the ability to inject Lua variables that were extracted from the request, for example via regex capture groups, query string values, or header values, right into the upstream URL or upstream headers.
So it has this sort of interpolation mechanism, where you can directly inject any value extracted from the request into an upstream request property. It gets rid of those edge cases where you had to write, say, your own little custom plugin, or use a Lua functions plugin to write your own Lua that would do this for you.
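A sketch of that interpolation (the header name is hypothetical, and the $(...) template syntax is an assumption from the request-transformer plugin family; check the plugin's documentation before relying on it):

```yaml
plugins:
- name: request-transformer-advanced
  config:
    add:
      headers:
      # Inject the first regex capture group of the matched route path
      # into a header sent to the upstream service.
      - "x-resource-id:$(uri_captures[1])"
```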
Okay, next on the list. Oh yeah, so the proxy-cache plugin is actually already open sourced and available for you today; you can already use it if you know how to install custom plugins. The request-transformer-advanced is being worked on by our team right now, so please wait a few more days before we actually make it open source; but if you follow the Kong organization on GitHub, you will surely be notified whenever we open source it. And I will reiterate: both of those plugins ship bundled by default in Kong 1.2.
So, technically, what this means is: if you have subdomains, and you have an SSL certificate spanning all of those subdomains, you can avoid having to configure one SNI entity in Kong for each of your subdomains, and instead have a single SNI entry for all of those subdomains. We will also demo this later on.
This means that we won't have more than this number of concurrent connections per nginx worker open to the database at the same time. So with this property you can finally tune the connection pool to Postgres and make sure that the number of concurrent connections that you computed will never be exceeded. We still recommend it eventually, and I'm sure some of you already use it in production or have thought about it.
We added this property; it's still not merged, the PR is #4551 and still up for review, if anybody has any feedback or wants early usage of this new property. And finally, in Kong 1.2 as well, there is a new property in the /status endpoint of the Admin API, called memory, and this property will include objects that represent the allocated memory of a given Kong node.
We expose two types of allocated memory: the workers' Lua VMs, which are the garbage collector sizes of the LuaJIT VM running inside of each worker, and, for each lua_shared_dict, its size, its capacity, and its allocated slabs. Lua shared dictionaries, just to reiterate if you're not familiar with them, are memory zones that are shared between all of the workers, while a Lua VM is the VM running inside of a single worker, Lua being a garbage-collected language.
This memory is managed by the Lua garbage collector. So each worker VM entry will have the PID of the worker that hosts this VM and the size of its garbage collector; this size is updated every 10 seconds. And the shared dictionary part has the capacity of each of the configured shared memory zones, which can be very useful when debugging a production instance, to make sure that your custom nginx templates, or the custom zones that you added, are taken into consideration.
The capacity of a shared dictionary is the one that you configured, and the allocated slabs are the pages allocated for this given memory zone. I want to stress here that even if these numbers look close, with the allocated slabs approaching the capacity, this doesn't mean that the shared dictionary is full, right. Shared dictionaries have a least-recently-used eviction algorithm that makes sure to evict previous entries in order to allocate new ones. This is how the database cache, for example, works.
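The LRU eviction behavior described above can be sketched in a few lines (a toy model, not the actual nginx shared-memory implementation):

```python
from collections import OrderedDict

class LruDict:
    """Toy fixed-capacity dict that evicts the least recently used
    entry when full, like a lua_shared_dict under memory pressure."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)     # refresh recency
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)
        return self.items[key]

cache = LruDict(2)
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)   # capacity exceeded: "a" is evicted
```

A full dictionary keeps accepting writes; it just pays for them by dropping the coldest entries.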
In Kong 1.3 we're looking at bumping our OpenResty and nginx versions. The latest OpenResty release candidate, 1.15.8.1 RC2, is currently up for testing and will be promoted to a formal release this week if no major bugs are reported, so we're expecting to bump the underlying version for Kong as well. This will bring many improvements to Kong, including support for ARM64 in OpenResty and in LuaJIT, which is a huge piece of the native ARM64 support for Kong itself.
We also get, thanks to the nginx and OpenResty work, significant additional performance improvements, and here we're talking about baseline improvements, just lower latency: on x86-64, OpenResty measured up to 10% performance improvements with this release. So we're really excited to bump our underlying nginx and OpenResty versions. On top of those, we also benefit from all the latest nginx updates, right: new directives, the upstream keepalive and keepalive requests properties, etc.
It's a release that many of the community members have been testing; some of them have contributed to this release as well. It's a project that Kong also actively maintains and has the knowledge and the expertise to move forward, so we're really excited to invest more resources in those technologies and to make sure that we contribute to a runtime whose benefits come back to us, to Kong, as well.
So, yeah, this is a pretty big milestone for Kong to cross, the gRPC proxying milestone, and we're really excited about releasing this one. Like I said, it's still a work in progress; we do have a proof of concept running. None of that is public right now, but it will probably be public in the next month or so, and up for early testers to take a look at.
So, we do have mutual TLS capabilities already in Kong, more on the service mesh aspect, with the stream listener property, like I was saying, which was raw TCP proxying. But here we're looking at mutual TLS support in the gateway, in the HTTP mode. So Kong will be able, in Kong 1.3, to dynamically request a client certificate from the client and eventually verify these client certificates. Here Kong will act as a server performing mutual TLS and making sure that it authenticates its downstream client.
We will be building a mutual TLS authentication plugin as well, but more on that later. We also plan for Kong 1.3 to support mutual TLS as a client. So when Kong connects to an upstream, we want Kong to dynamically use a client certificate that you would have configured beforehand in your database or in your DB-less deployments. Kong will then be able to talk to mutual TLS upstreams and to use a different client certificate to connect to each different upstream.
Okay, okay, so now it's time for some action, yes, some demo time. What I want to demo today is what is already merged and ready to ship in Kong 1.2. Sorry, nothing gRPC to make you jealous; here's what I will demo.
Wildcard SNI matching, and we're going to have a look at the new memory statistics in the /status endpoint. Okay, so here I have Kong running. It does show Kong 1.1, but that's actually the next branch of the Kong repository; we haven't bumped the version number yet. What I have here is a Kong environment with some properties, some services in my declarative configuration, and an example certificate and an example key.
By specifying a wildcard SNI, we can make connections to Kong with different subdomains, and our wildcard SNI will match all of those subdomains and make sure that we get served this certificate instead of the default Kong certificate. Okay, so on the right here I can start watching the logs, there will be interesting things in there, and what I'm going to do, like I said, is import the new declarative configuration into my database.
So here we go, and I'm going to start Kong. Perfect, so Kong started, and the first thing that I want you to notice, on the right here, is that we have some new log entries at early startup that tell me that I have a warmed-up cache. So all of the plugin entities are already in the cache, and all of the DNS entries have been resolved ahead of time. Now, I don't have any plugins configured, so that was a very quick operation, but I do have a DNS entry, on the left.
That one is a special case, but I can also specify custom entities, OAuth 2.0 tokens for example, and make sure that those are pre-warmed inside of the cache, right. Okay, so I'm going to make a test request, to make sure that this route, which is listening on the root path, actually proxies to httpbin.org. And, perfect: we went through Kong, we have the Via header, and we got our response from httpbin in nine milliseconds. Perfect, okay. So now what we want is to configure this certificate.
Some of you may already know how the Admin API works, but it's a set of endpoints that allow us to dynamically configure Kong. Kong's proxying runs on ports 8000 and 8443, but Kong's Admin API is on port 8001. So I'm going to make a POST request to port 8001, /certificates, and I'm going to upload a certificate, my example certificate file, and alongside it the private key.
That applies to other products and environments too, so there will probably be other news for the community on that in the future. But here, what I got is a certificate uploaded to Kong, with this identifier, and I want to make sure that when I connect to Kong with a given SNI, I get served this certificate and not the Kong default certificate.
For this, I must POST a new SNI, so we're going to do this right now: localhost, port 8001, /snis. Right, great. So here, in the name property of my SNI, I can specify this new wildcard. If you tried this in a version before Kong 1.2, you would get an error, because this character would not be accepted. And I attach the certificate id.
Old habits die hard. Okay, great: so now I have configured an SNI on my certificate with the name *.my-domain.com. So here, if I open a connection to Kong and I specify a server name, say www.my-domain.com, yep, connect, and I connect to port 8443, the TLS port of Kong.
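The wildcard matching being demoed can be modeled in a few lines (a toy sketch, not Kong's router code; as the demo shows, a wildcard can stand for the leftmost label, as in *.my-domain.com, or the rightmost, as in my-domain.*):

```python
def sni_matches(pattern: str, server_name: str) -> bool:
    """Toy wildcard SNI match: '*' may replace exactly one label,
    either the leftmost ('*.my-domain.com') or the rightmost
    ('my-domain.*'). Not Kong's actual implementation."""
    if pattern.startswith("*."):
        suffix = pattern[1:]               # ".my-domain.com"
        head = server_name[: -len(suffix)]
        return server_name.endswith(suffix) and head != "" and "." not in head
    if pattern.endswith(".*"):
        prefix = pattern[:-1]              # "my-domain."
        tail = server_name[len(prefix):]
        return server_name.startswith(prefix) and tail != "" and "." not in tail
    return pattern == server_name
```

The prefix form matches any single-label subdomain but not the apex domain itself, which is exactly what the demo shows next.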
So what's going to happen here is that I'm going to connect to port 8443, Kong's port listening for HTTPS connections, and I'm going to send an SNI value of www.my-domain.com. And here I can see that I did get my-domain as the common name in the certificate that I was served: the certificate that I was presented was indeed the certificate that I uploaded. But what if I connect with another subdomain, say foo.my-domain.com?
Perfect: I also got the same certificate. So I didn't have to configure two SNIs, one with foo.my-domain.com and one with www.my-domain.com. Now, what happens if I connect with my-domain.org? Here I get the default Kong certificate, the self-signed one that's automatically generated for you if you start Kong without specifying your own certificate. So what I can do to address this is simply go and manage my SNI entities again.
So I will specify my-domain.* instead. And now, if I open the same connection with my-domain.org, I am presented the certificate that I uploaded to Kong and not the default certificate. Again, the same goes if I connect with my-domain.fr: I actually get the custom certificate that I specified. Yes, and so this is wildcard SNI matching, coming in Kong 1.2, and we hope that it will help you manage your certificates a little bit better operationally.
There have been a lot of complaints from users about this pain point; I've seen with my own eyes users configuring 20 SNI entities for all of their subdomains. So hopefully this will alleviate some of their pain. The other endpoint that we can demonstrate, on the Admin API again, is the /status endpoint. Many of you know it; you've used it to monitor the health of your Kong node.
It gives you the number of currently active connections; it gives you the general health state of Kong, whether it's connected to the database or not; and since 1.2, I should say, it will also give you this new memory property that we mentioned, with the configured Lua shared dictionaries, the capacity of each dictionary and the amount of memory currently allocated in that shared dictionary. It will also give you the garbage collector size of each worker's Lua VM.
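As an illustration, a /status response with the new property might look roughly like this (the field names follow the talk; the exact shape and the values are a sketch, so check the 1.2 Admin API reference):

```json
{
  "memory": {
    "workers_lua_vms": [
      { "pid": 18477, "http_allocated_gc": "1.32 MiB" }
    ],
    "lua_shared_dicts": {
      "kong_db_cache": {
        "capacity": "128.00 MiB",
        "allocated_slabs": "12.00 MiB"
      }
    }
  }
}
```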
That will be very helpful, you know, not only for users of Kong to properly allocate resources, to make sure that your pods or containers and your virtual memory, your system resources and limits, are properly configured, and that your database cache is properly sized as well. Your cache is not necessarily full; maybe you have some leeway in there, but even if it is full, don't worry about it.
Like I said, you have the LRU eviction mechanism. And it will also help us, Kong developers, with bug reports and all of those day-to-day tasks, making sure that we can properly help you debug your instances and configure them properly, etc. If I want to, I can even specify a custom unit.
So if I want the response to be in kilobytes, I get the answer in kilobytes, and if I want raw bytes, I can get that too; maybe you're going to build a dashboard with progress bars and whatever sort of UI, and you can do this as well. So it's a very handy tool, a very handy new property in the /status endpoint. We also hope to work on making those properties available via the Prometheus plugin, and eventually our logging plugins.
That way we can report those metrics and make sure that you get the best observability of all of your Kong nodes; but for now, in 1.2, it will be part of the /status endpoint. Okay, so that's it for the demo today: we demoed the wildcard SNI matching support and the new memory metrics of the /status endpoint. So now, I owe you some dates; I think that I've been teasing all of those things long enough, and demoing some of them.
So when do we expect to ship Kong 1.2? Our target date for a stable, formal release is the end of May, the end of this month. We hope to release a first release candidate very soon, and this estimate is just that, an estimate. So stay tuned, and hopefully, when the release candidate is out, you will help us test it and maybe benchmark it, make sure that our performance improvements live up to the standards, and maybe be the early adopters and report any issue you may see, which is always super helpful to us.
Getting feedback on the release candidates is how we ship our releases faster, and we've had an incredible run with our community so far. So we're really happy about that, and we hope for your continued support in testing the release candidates. For Kong 1.3, we're looking at a little bit more work; so, like I said, gRPC support will be new in the next version, and mutual TLS.
That's just our current estimate. I want to stress, too, that if you want to test any of those features today, anything that's merged in the next branch, which is currently the shape of Kong 1.2, including the two features I demoed today, you can download it via the nightly packages that are available on Bintray. Those are packages that every night take the next branch, the upcoming version of Kong, and build all of our release artifacts.
The way we built it is that we integrate with our database access layer, and any entity that is available as a DAO object, as a DAO instance, can be specified in this property. So that includes consumers, API keys for the key-auth plugin, or basic-auth credentials from the basic-auth plugin. Think about it this way: you have the core entities, consumers, plugins, services, routes, and you have the plugins' own entities; so the key-auth plugin, for example, creates a table in the database with the key-auth credentials, right.
Those can also be pre-warmed and cached as well. So if you use a plugin, say OAuth 2.0, you have a table of OAuth2 tokens. If you specify the OAuth2 tokens entity in the warm-up property, your clients connecting to Kong will, you know, include their tokens, but the token lookup won't have to hit the database, because it will already be warmed up in the cache. So I would advise to specify any entity from custom plugins that you may have configured.
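Extending the earlier warm-up example, pre-loading a plugin's own entities might look like this (the oauth2_tokens entity name is an assumption; check the DAO name in your plugin's schema):

```
# kong.conf
db_cache_warmup_entities = services, plugins, oauth2_tokens
```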
The reason why we don't do it by default is that there is, of course, a trade-off, and the trade-off of the warm-up is the time to boot up a new Kong node, right: the boot process will be a little slower. And eventually, if the size of the cache is not large enough to hold all of your entities, it will spill over; you will have some warning logs, and that's okay, because as I said, you have the eviction mechanisms in place.
When a route arrives in Kong, we actually rebuild the router in its entirety, by paginating over all of the routes, and that is why sometimes the p99 latency can be fairly high when you update routes at runtime: because we have to do this process of iterating through all the routes and rebuilding the router. And that's why the other performance improvement we've been working on rebuilds this router in the background, instead of on the request path; the routes would actually never be queried individually.
Thanks so much for joining us, everyone. We do these calls on the second Tuesday of every month, and we're always looking for external presenters as well; so if you ever want to share how you use Kong, or if you've written a plugin for it, please feel free to contact Kevin or me. I will drop both of our email addresses here in the chat, and we also post the recordings of these online as well. So thank you so much for joining us.

Please check out Kong Nation, our community forum at discuss.konghq.com, and post any feature requests, general questions about Kong, about Kong plugins, and about all the projects that are related to Kong, such as the Kubernetes Ingress Controller, and stay in touch with us. And like I said, if you want to give the next branch a try, which contains all of the improvements of 1.2, it is already available today on Bintray for download as the nightly release packages. Yeah.