Description
Kong Builders is the livestream series that takes our developer-focused toolsets and puts them on display in the best venue possible – building applications and connecting workloads.
Join Viktor Gamov as he takes a hands-on, practitioner-focused approach to exploring Kong’s tools.
See upcoming and past episodes at Konghq.com/kong-builders
#KongBuilders #DevOps #Livestream #livecoding
Welcome! This is the place where we check sound, and on my side everything looks fine. Welcome to Kong Builders, the live, unscripted show where we talk about all things connectivity and cloud. My name is Viktor Gamov and I will be your host today.
It's been a while. It's been two weeks: we're trying to keep this bi-weekly cadence, and we talk about interesting things here. So if you see me and hear me, let me know in the comments where you're joining from. As a reminder for those of you joining us for the first time, we stream to a few places. We stream to YouTube, which is my favorite place; this is where I like to be.
We also stream to LinkedIn. Apparently a lot of people have a lunch break, because we're streaming this at noon Eastern time, and according to our stats, our data and machine learning, many people who watch the show may be on a lunch break. In Pacific time, though, it's 9 a.m.
People are just waking up, grabbing a cup of coffee, joining us for a session where we're going to be talking about some cool stuff. Okay, I see people joining us, some regular folks already: Tony from Winnipeg, with snow. No, I don't like snow, but I'm super happy you're enjoying this, Tony. And yeah, coffee is happening. I don't have a coffee yet.
Today I did have a cup of coffee, though. So we're going to be doing this. I hope everyone had a great break during the holidays, maybe doing nothing; maybe some people were working. I've been working as well, doing some research for some personal projects. I got a new laptop after Christmas, and I did some research on how to do things on it. This laptop has a slightly different architecture than my previous machines, which caused some interesting issues. I was working on some local workflows for Kong, and many of you know that I'm a huge Kubernetes fan: the work I do is somehow related to Kubernetes, and I like to run my demos on Kubernetes.
Sometimes people ask how to deal with local deployments, how to deal with local workflows, how we can run Kong in a local setting. You can find previous videos on this YouTube channel, if you're watching on YouTube. Oh, I forgot that I'm streaming to multiple places at the same time. On the Kong YouTube channel, I did a video about running Kong in Docker, and in previous live streams I showed you how to use local development for testing some deployments. In the same theme of local deployment, I decided to test out some of the popular tools for running Kubernetes locally.
Specifically, how we can do this with Kong, and how we can test it. Kong is an API gateway, so you need certain requirements from the local deployment if you're running Kong there. Today I'm planning to look at two tools. One is minikube, and the other is kind. kind stands for "Kubernetes in Docker"; it's an abbreviation. We're going to look into both, and at how you can deploy and run Kong there.
One thing that actually bothers me is that people don't know about Kong, or don't know how to run Kong on Kubernetes, because there was no reasonable information available. Maybe that's why some of the out-of-the-box solutions are popular. I don't want to bash those solutions, but think about them as a kind of in-store brand: whatever ships with your favorite distribution.
Now I'm going to start with minikube. minikube is a pretty cool project. It gives you the ability to run Kubernetes locally, but it has a concept of drivers. Let me make this screen a little bit bigger. So it allows you to run Kubernetes locally, and it has this concept of a driver. With a driver, you can run it on top of any underlying technology you have available. You might have Docker installed, or HyperKit, which is a low-level virtualization framework available on macOS.
I tried to run multiple things, and only one combination turned out to be the more or less successful, portable solution. If you found something different, or if your mileage varies, write it down in the comments for me. Only the combination of macOS and Docker as the driver worked fine. I tried other things. HyperKit used to work, but it doesn't really make sense, because Docker still uses HyperKit underneath. Podman didn't work for me; people from Podman, if you're watching this, let me know what I'm doing wrong, because I couldn't make it work. I tried very hard yesterday, along with VirtualBox and other solutions.
VirtualBox is not working with my current setup, so we're going to stick with macOS and Docker. Since I'm on a Mac, I can just do brew install minikube, and it will go and install minikube for me. I already have it installed, so I don't need to do anything, and if I check the version, it's 1.24.
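For reference, the install and version check look like this (Homebrew on macOS; the version number will of course differ on your machine):

```shell
# Install minikube via Homebrew (macOS)
brew install minikube

# Verify the installed version
minikube version
```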
With this setup, we're going to try to run some apps. We will deploy them to Kubernetes, we will deploy Kong and the Kong Ingress Controller, and try to do everything locally. The idea is a local experience: somehow access the applications running inside the Kubernetes cluster from the host machine. That's my success criteria. Right now I do have Docker, and nothing is running in Docker yet.
When I do minikube start, I want to use these settings: four CPUs and eight gigabytes of memory. Maybe that's overkill for some people; I just want to have a little bit more flexibility here.
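The exact flags aren't fully visible on stream; this is the standard way to pass those resources to minikube:

```shell
# Start a local cluster with a bit of headroom
# (4 CPUs / 8 GiB is generous; Kong itself needs far less)
minikube start --cpus 4 --memory 8g
```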
Drivers: let's see what kinds of drivers there are and how to enable them. Okay, this page is about development of new drivers; no, we want something like the getting-started handbook. Let's see: deploy, access and configure your cluster. Yes, there is a command that allows you to set the default driver. In this case, if I just do the following.
minikube config get driver: there's no driver set by default. So let me set Docker as my default driver; I will just do minikube config set driver docker.
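Those two commands, spelled out:

```shell
# Check whether a default driver is configured (empty by default)
minikube config get driver

# Make Docker the default driver for future "minikube start" runs
minikube config set driver docker
```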
I don't need to pass the Docker driver flag on start anymore, but I already have it in my command. As you can see here, I'm running this on an Apple Silicon machine, and what I'm doing right now actually works on other machines too. Linux should not be a problem; it usually just works there.
This type of tooling tends to show its problems in some strange places, like Windows or macOS. So I think it's a pretty good use case, because many people are trying to run Kubernetes locally, and to run it with Kong as well. All right, as you know, you can always write in the comments, in the YouTube section or on LinkedIn.
You can also comment on the recordings if you want some questions answered during the show; I can do that for you next time, but don't forget to put your comments in the comment section. All right. So if I run this, it looks like it's up, and if I run docker ps, I do see that one minikube image is running. This is our Kubernetes cluster.
Okay, that's my alias. Oops, not kind: k is my alias for kubectl. So don't worry every time I type something like k apply -f followed by the bit.ly shortcut for Kong for Kubernetes. What does that do?
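For reference, installing Kong with a single manifest looks roughly like this. I believe the shortcut used on stream is Kong's documented bit.ly/k4k8s ("Kong for Kubernetes") redirect, but verify it against the current Kong Ingress Controller docs before relying on it:

```shell
# "k" is an alias for kubectl on this stream
alias k=kubectl

# Install Kong and the Kong Ingress Controller from the all-in-one manifest
k apply -f https://bit.ly/k4k8s
```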
Let me do this one. Oops, for some reason it did not allow me to open another session. Let me fix my terminal: I have profiles, and my default is actually... why is that? Let me fix this real quick; that's what happens when you do live streams. The live-streams profile uses the wrong terminal session. Anyway, we're going to use another window.
Live streams: this dark session, okay. k9s, that's how I look at my local deployment. Now I see what's running here. Services: nothing yet. Nodes: just one node. And pods: some pods are running inside; these are the pods that support Kubernetes itself. So what I will do here is run that apply, and we will see a bunch of stuff happening for Kong. Let's go to services now.
Many of you will have seen this kind of thing. When you're running in a managed cluster somewhere outside, you'll see something like this: the external address is pending. There are a couple of options for what you can do. One option is to install a load balancer, and when you run minikube addons list, you'll see a bunch of interesting things.
I have had limited success running that with my current setup. But I found there is another option, and it's called minikube tunnel. minikube tunnel lets you connect to services that require a load balancer, and apparently this is the only approach that is more or less portable. I know that many people on my team instead install MetalLB and configure that.
So I'm going to be using the tunnel. For example, I run minikube tunnel in a separate window. What it will do: Kong requires access to ports 80 and 443, and those are privileged ports, so I need to grant access in a sudo fashion.
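The command itself is just:

```shell
# Run in a separate terminal; it stays in the foreground.
# It asks for sudo because it binds the privileged ports 80 and 443.
minikube tunnel
```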
With this, I can do something like http localhost, and all of a sudden we see them: the words that everyone in this industry wants to hear, "no Route matched with those values". I'm just kidding; write in the comments if you got the reference. So it works. We hit this endpoint, meaning that Kong is listening. Now we can configure things like an Ingress.
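That check, roughly as done on stream (http here is the HTTPie client; curl works the same way). An unconfigured Kong proxy answers with a 404 and that message:

```shell
# Hit the proxy with no routes configured yet
http localhost
# Expect a 404 with body:
# {"message":"no Route matched with those values"}
```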
It turns out that in the world of Kubernetes, using this http-echo server is kind of the thing. If I search for http-echo, I see it in many documentation pages.
It looks like a really small container that listens on a particular port and replies with the same request, so we're going to use the same thing. It's my personal motto, I would say: meet developers where they are. If you're already using this one, I'm not trying to change your habits. We're going to change this later, but for now we're going to use it.
Here's the problem, though. I'm currently running on an arm64 machine, and this image was not built for that architecture, so it fails when I deploy it into Kubernetes. That's why I'm using a fork: someone in the community went and built a multi-arch image. So we're going to use that pod and two services; one service will reply with "foo". Again, this is the standard example, not mine.
I didn't come up with this example; it came from the documentation somewhere, either from kind or from minikube, so I'm just using whatever is there. The other one is the same thing but replies back with "bar", and we have two services. Let me show you: we have our two pods, two services, and one Ingress.
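A minimal sketch of half of that setup, based on the echo example in the kind documentation; the second copy just swaps foo for bar. Note the image name is the upstream hashicorp/http-echo; as mentioned above, on arm64 you may need a community multi-arch build instead:

```shell
# One echo pod plus a service; repeat with foo -> bar for the second pair.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: hashicorp/http-echo   # swap for a multi-arch fork on arm64
    args: ["-text=foo"]
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  - port: 5678
EOF
```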
"No Route matched" for this Ingress, which is okay, because we created the Ingress but didn't specify a class for it, so the controller that would be listening for this Ingress doesn't know what to do. Let's patch it; in this case I'll just apply a patch that specifies the ingress class. Right now, as you can see, there's nothing here, so we'll wait until the patch command has executed. The tunnel is still running.
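The patch can be done like this. This is a sketch: the Ingress name example-ingress is a stand-in for whatever the manifest created, and on older Kubernetes or Kong versions the class is set via the kubernetes.io/ingress.class annotation rather than spec.ingressClassName:

```shell
# Tell the Kong Ingress Controller to take ownership of this Ingress
kubectl patch ingress example-ingress \
  -p '{"spec": {"ingressClassName": "kong"}}'
```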
If I look at the Ingress now, it has a class, as you can see here, a class called kong, meaning Kong will now be watching everything that happens with this Ingress, and that gives us access to it. So if I just do localhost/foo, we get the response from foo; those are some of the headers Kong added, plus the response itself. If I do /bar, it works too.
So that's how you do it with minikube. Let me pull up my notes on what we did with minikube. First: minikube start. Then install Kong with the shortcut command. Use minikube tunnel to provide access to LoadBalancer services. And after that, deploy the Ingress with an ingress class name.
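Those notes, condensed into one runnable sequence (with the caveats already mentioned: verify the bit.ly shortcut against current Kong docs, and substitute your actual Ingress name):

```shell
minikube start --cpus 4 --memory 8g     # local cluster
kubectl apply -f https://bit.ly/k4k8s   # Kong + ingress controller
minikube tunnel                         # separate terminal; needs sudo
kubectl patch ingress example-ingress \
  -p '{"spec": {"ingressClassName": "kong"}}'
```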
Speaking of this ingress class name, let's talk about it real quick. We do have an ingress class, and there was a URL I wanted to show you. kubectl get ingress example-ingress -o yaml, piped through a nice pretty-printer. Now, as you can see here, we have this ingressClassName as part of the spec. I'm still trying to figure out the default behavior.
What's the default behavior going to be? What I mean is: why isn't it picked up when there's only one controller? If you know the answer, write it in the comments or tweet at me, and I'll figure out what to do next. But essentially, ideally, since Ingress is a vendor-neutral specification...
...when no one else is listening, it should perhaps just pick up whatever ingress controller is available. I don't know. So that's how it works with minikube: we start it, we install Kong, we run a tunnel, and we also enable the Ingress. Another thing I started working on while I was on break: there are plenty of add-ons in minikube.
If I just list them, there's plenty of stuff, but there's no Kong add-on. The thing I started to work on, which hopefully will be merged sometime soon, is a Kong Ingress Controller add-on for minikube. We'll see. You would be able to enable it very quickly: just minikube addons enable kong, and you'd be golden. Okay, so let me clean up a little bit and stop the tunnel.
I'll stop minikube, and we'll continue to the next one. Okay, I don't see many comments; only Tony is cheering me up for this live stream. If you want to see yourself in the live stream, go there in the comments and I will see you there. Okay, that was minikube, so I'll just check docker ps. Another very popular approach people use is kind.
So let's take a look at kind, then. kind is another option, and kind lets you create Kubernetes clusters. As far as I understand, kind has only one driver, and that driver is Docker: it's Kubernetes in Docker, hence the name and the logo. And if I just do kind version...
...it is also supported on the new Apple Silicon and arm architectures, which is good. You could potentially run it on a Raspberry Pi, I think, since that's also arm. So what can we do here? Oh okay, we need to do kind create cluster.
kind doesn't have functionality similar to minikube's tunnel. However, there is documentation that explains how this can be done. The essential idea is to create the cluster with a special configuration; specifically, this is the cluster definition for kind, and one of the things we need is extraPortMappings. Essentially, we map a port of the node, the container that runs as a node, out to the host.
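This is the cluster configuration from kind's ingress documentation: ports 80 and 443 of the control-plane node get published on the host, and the node is labeled ingress-ready so an ingress controller can be pinned to it later:

```shell
# Create a kind cluster whose node exposes 80/443 on the host
kind create cluster --config=- <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
```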
The docs have configurations for three ingress controllers. One is missing, which I'm also working on adding; that's why I'm running this experiment here with you folks on how to enable it. I will be using this config when I go ahead and create my cluster; that's what we're going to use. Also, with this configuration I will be able to specify where to deploy my Kong, so do not forget about this part.
So let's run it: kind create cluster with the config file for this cluster. It will create the cluster with the configuration we provide, and it's starting the control plane. It's always fun to show this kind of thing in real time and see how it affects the quality of the stream.
So far the performance of this new machine has been stellar, and the live stream quality didn't suffer. While it's starting, it's time to check Twitter and see if we're doing well there. Not many people watching there; people, you should, that's the important thing. Now kind has started, and it was pretty quick.
If I go to k9s now, and list the nodes, there's one node: the kind control plane, and a bunch of stuff is deployed on it. That's cool.
We're just doing the same thing, applying the Kong configuration here, and now what we see is the same thing with the external IP: pending, because there's no load balancer deployed here.
So what I want to do is use a different approach. ClusterIP is the default network configuration available for a pod, but in the cloud we use LoadBalancer a lot: a LoadBalancer gives you an external IP, and that external IP gets routed to the cluster IP of the pod that runs my API gateway.
So first of all, I need to patch my deployment. If we were running kind with multiple virtual nodes, there might be multiple Docker containers running inside the same network.
I want to make sure that the pod running Kong lands on the same node that was marked as ingress-ready. How does that work? This is my kind cluster, created with that config, and for the node where my stuff will be running...
...if you look here, there is an extra label: ingress-ready was added to this node in the config, meaning this node already has its ports externalized to my host network. Now I want to run my pod on that same node, take the port that was exposed to the outside world, and use it for my pod, so traffic will be routed to my pod.
Another thing: I need to modify the ports that my Kong will be using. The container port stays the same, because Kong is configured to use that port through an environment variable. The host port is what we want to map to the port of that particular node. So we're going to be using this one.
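A sketch of that deployment patch. Assumptions to check against your manifest: in Kong's all-in-one manifest of this era, the proxy Deployment is named ingress-kong in the kong namespace, the container is named proxy, and it listens on 8000/8443 inside the container. The patch pins the pod to the ingress-ready node and publishes the proxy ports on it:

```shell
kubectl patch deployment -n kong ingress-kong --patch '
spec:
  template:
    spec:
      nodeSelector:
        ingress-ready: "true"
      containers:
      - name: proxy
        ports:
        - containerPort: 8000
          hostPort: 80
        - containerPort: 8443
          hostPort: 443
'
```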
Now, if I go here, the service doesn't have to be LoadBalancer anymore; it can now be just NodePort. Essentially, that's pretty much it.
If I do http localhost, we see that the routing the tunnel did for us before, we've now done manually; the end result is the same. The next thing is that there's no Ingress yet. Again, we can do this two ways. One way would be creating the standard Ingress and patching it, or we can create the Ingress with...
...ingressClassName set to kong, in which case Kong will start using this Ingress immediately. So if I apply it and hit the endpoint...
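That Ingress, as a sketch. The service name and port follow the foo/bar echo example; spec.ingressClassName needs a reasonably recent Kubernetes, and older setups use the kubernetes.io/ingress.class: kong annotation instead:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 5678
EOF
```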
...that's pretty much it. So with kind, we changed the service from LoadBalancer to NodePort, and the Ingress works the same way.
So now you have it: two solutions for how you can run Kong, and how you can run your Kubernetes services behind Kong, locally.
You can also run this in a different setting: you can find my previous sessions where I talked about how to test your services with Kong using Testcontainers locally. Also, if you go to the tutorials and view the full playlist, you can find the one on how to start Kong with Docker and Docker Compose, and all those kinds of things. You can always message me on my Twitter account; you definitely want to follow me for more updates. Hopefully you enjoyed this show the same way I enjoy doing it.
I don't have any problem coming up with subjects, because there are a lot of them: questions people ask on Stack Overflow, and questions people ask on Kong Nation, our forum. But let's also make it a little bit more interactive: you can always ask your questions about Kong, connectivity, and cloud on my Twitter.