From YouTube: Kubernetes bill shock
My name is Nick and I'm heading the dev team at Spectro Cloud. I've spent around the last six years working on Kubernetes in various areas of the product, and you can often find me giving talks at community events such as KCDs or other DevOps meetups in various cities.

So, let's take a look at the agenda for today. We're going to start by taking a look at the billing factors, then we're going to be talking about Kubernetes architectures and the patterns that can be applied to optimize your costs. Then we'll also talk about various software that can help with your bill optimization, and we'll follow up with a quick summary.
So, let's get started. There are multiple dimensions to consider when tackling Kubernetes costs. There are essentially three: first, the infrastructure; then your cluster design, or your Kubernetes cluster architecture; and then the human aspects, or, more specifically, the skill sets.
For the infrastructure, you have to make a fundamental decision on whether you want to run your Kubernetes cluster in the cloud or on premises. Of course, this is going to have a huge impact on your compute cost. If you run an on-prem design, then you're going to have to pay for your servers and all your compute resources up front, unless you find a subscription deal with your hardware provider. If you're running in the cloud, the bill will be spread across multiple months or years, and you can also benefit from special discounts if you commit for a certain amount of time.
Then there are also the storage and networking costs; again, they will vary depending on whether you're running on premises or in the cloud, and we're going to take a look at that. And finally, still for the infrastructure, you have to factor in the extra software and licenses you would need, especially if you're running on premises, things like virtualization management software for the hardware; all this may come in addition to your bill.
Then, for the cluster design, you have to look at this carefully: things like multi-region or availability-zone deployments may have an impact on your costs, and we're going to see this in a minute. You may also want a multi-tenant environment for your cluster, to sort of oversubscribe your cluster and reduce the total amount of cash you want to deploy.
And then finally, we're going to take a look at why it's important to profile your application properly: know the type of workloads you're going to run, how much memory and CPU they're going to consume, and whether they are bursty or steady workloads, because that's going to have an impact on your scaling capabilities. And then you have the human aspects, or how to grow the skill sets within the different teams.
For the DevOps teams, they will have to learn how to factor Kubernetes into the different pipelines they are building. Developers will need to understand some of the basic Kubernetes constructs that facilitate their application's integration into the platform, things like environment variables; they don't necessarily have to know every Kubernetes concept, just enough to start coding efficiently within Kubernetes. And then, of course, there is the SRE or platform engineering team.
On the hardware side, some vendors can now provide you with a cloud-in-a-box solution that is fully API-driven and designed with Kubernetes in mind, where you can also customize the type of processors you want to use and save extra money on energy; there may be better options than Intel there, it's really up to you, depending on your application type and what it supports.
A
But
this
is
a
good
option
nowadays,
and
Cloud
providers
also
provide
this
sort
of
solutions
where
they
you
have
an
extension
of
their
environment
on
your
premises,
things
like
AWS
Outpost,
where
you
can
leverage
a
local
stack
but
still
having
the
benefits
of
the
cloud
provider.
You
know
billing
model,
although
this
may
be
more
pricey
than
going.
You
know
up
front
with
a
full
bespoke
stack.
In that case you have to do the math yourself and plan for a certain number of years until you have to renew everything. Typically this would be something like a five-year period, so you can do your math, compare both types of environment and take the cheapest of them.
Now, as to whether you want to run your Kubernetes cluster as virtual machines: there are definite benefits, such as high availability for your virtual machines and distributed scheduling for your VMs within your hypervisor environment. There are well-known platforms such as VMware, maybe Microsoft Hyper-V, or other hypervisors such as Red Hat's.
A
Those
are
valid
option
because
they
provide
in
you
know
built-in
automation,
backup
and
Disaster
Recovery
Solution,
so
those
important
Concepts
are
essentially
part
integrated
in
the
solution.
If
you
use
a
permitol
solution,
then
you
may
have
to
use
agent
or
you
have
to
backup
a
different
at
a
different
layer,
and
do
you
know
storage,
replication
at
a
different
layer,
so
this
may
get
things
a
bit
more
complicated.
If you choose to run your Kubernetes cluster in a public cloud environment, then the compute instance type is probably going to be the component that affects your bill the most. So you may want to treat this carefully and have multiple worker node pools depending on your workload type: heavy or greedy workloads you may want to run on dedicated pools, while workloads with a lower resource profile you run on another type of worker node pool, and so on.
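As a minimal sketch of that pattern (the label, taint and pod names here are hypothetical, not from the talk), you can steer a heavy workload onto a dedicated pool by tainting the pool's nodes and matching both the taint and a node label from the pod spec:

```yaml
# Nodes in the dedicated pool are assumed to carry the label
# workload-class=heavy and the taint dedicated=heavy:NoSchedule.
apiVersion: v1
kind: Pod
metadata:
  name: batch-crunch
spec:
  nodeSelector:
    workload-class: heavy        # only schedule on the heavy pool
  tolerations:
  - key: dedicated               # tolerate the pool's taint
    operator: Equal
    value: heavy
    effect: NoSchedule
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo crunching; sleep 3600"]
```

Pods without the toleration can never land on the expensive pool, so it stays reserved for the workloads that actually need it.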
Another aspect to take into account is the CPU type, of course, with different node pools or even different clusters. You may also take advantage of discounted nodes, things like GCP Spot VMs or AWS Spot Instances, where your node can actually be killed, or shut down, at any time. Now, this may have some impact on your application, depending on the type of application; what I'm thinking about here is if you are running stateful applications.
That's where you depend on your storage availability. It means that if a node gets shut down, you want the storage to still be available. For that purpose, what you can do is choose a CSI, a storage driver within Kubernetes, that is able to replicate volumes synchronously and present those volumes remotely to nodes, so that when a node is shut down, the pod can be restarted on another node and still have its storage available remotely, with no impact on your stateful application.
Otherwise, if the node gets killed, first you will have to restart the pod on another node, and then you will have to rebuild the storage through a software replication layer. For example, if it's a database cluster, then it will be up to the database cluster to rebuild the data, as opposed to relying on the storage layer to make the data immediately available to the pod that gets rescheduled on another node. So take a look into those CSIs; things like Portworx enable those features within your cluster.
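For illustration, a replicated StorageClass using the Portworx CSI driver could look like the following; the parameter values are assumptions based on Portworx's documented repl and io_profile options, not something shown in the talk:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: pxd.portworx.com   # Portworx CSI driver
parameters:
  repl: "3"                     # keep 3 synchronous replicas across nodes
  io_profile: "db_remote"       # tune for database-style workloads
allowVolumeExpansion: true
reclaimPolicy: Delete
```

With three synchronous replicas, a pod rescheduled after a spot-node reclaim can reattach its volume from a surviving replica immediately.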
Then, if you're running on infrastructure as a service, as opposed to managed Kubernetes like EKS, it means that you are responsible for your own control plane nodes, running as cloud virtual machines. Those control plane nodes typically need fewer resources, because you don't want to run any workload on them other than DaemonSets, or the control plane and other software components that are required by Kubernetes as a system.
Another couple of gotchas you want to pay attention to concern network ingress, especially for multi-region and multi-availability-zone clusters, where data transfer will incur an extra charge. Things like backup replication from one AZ to another, arriving at the ingress of the second AZ, will incur extra charges. So there are a couple of things you want to do to alleviate that.
First of all, you want to set up VPC peering between all your VPCs to reduce that cost. Then, constrain data-intensive workloads within the same AZ in general, to avoid traffic leaking from one AZ to another, and also do DNS caching to avoid those extra DNS requests going from one AZ to another. Then you have the internet-to-your-environment path, the internet-to-workload traffic, and there you want to cache as many things as possible: all your static content you want to put into a CDN.
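As a hedged sketch of the "constrain data-intensive workloads to one AZ" advice (the zone value and names are placeholders), you can pin a Deployment's pods to a single zone with node affinity on the well-known topology label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-pipeline
spec:
  replicas: 3
  selector:
    matchLabels:
      app: data-pipeline
  template:
    metadata:
      labels:
        app: data-pipeline
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone   # well-known zone label
                operator: In
                values:
                - us-east-1a                       # keep all pods in one AZ
      containers:
      - name: pipeline
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
```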
You also want to have shared anycast IPs, so you can have multiple of them, as well as centralized ingress controllers that can then redistribute traffic internally where appropriate. Finally, you also want to pay attention to some storage considerations, such as data locality within your AZ.
If you have very chatty system components or applications, they can fill the logs quite quickly, and because your storage rescales on demand, at the end of the month you may have an extra terabyte of storage, maybe even more than that, appearing on your bill, and that may be a very bad surprise. So you have to monitor all of this very, very closely.
Next, let's take a look at Kubernetes architecture patterns for cost optimization, starting with the fundamentals and answering the question: why should you care about those patterns? The easiest answer is: because those patterns definitely have an impact on your Kubernetes bill at the end of the month. Things like resource efficiency will give you the ability to adjust cluster size and resources based on real usage.
That gives you scalability capabilities, both for scaling out, meaning that you're able to increase the size of a cluster during bursty periods, and for scaling in, reducing the size of the cluster, of your workloads, or of the number of pods for a particular application, using recommendations from the system. But for this there are some prerequisites: you need to enable resource settings at the pod level everywhere, which means that you have to define resource requests and limits for every pod, and you have to do this properly.
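As a minimal example of what such settings look like (the names and values are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server
spec:
  containers:
  - name: api
    image: nginx
    resources:
      requests:            # what the scheduler reserves for the pod
        cpu: 250m
        memory: 256Mi
      limits:              # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi      # requests < limits makes this pod Burstable
```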
So, let's take a look at how to do this properly. Kubernetes defines three quality of service (QoS) classes for your workloads, derived from how you set requests and limits for both CPU and memory. Those three classes are: BestEffort, where the pods will be evicted first in case of resource pressure; Burstable, where the pods are evictable under resource pressure, but only after all BestEffort pods are evicted; and Guaranteed, which means those pods are the least likely to be killed, only being at risk if they exceed their limits.
Kubernetes also introduces the notion of the priority class, which exists completely independently of the quality of service classes and determines the pod eviction order. It really comes into play when a new pod can't be scheduled due to resource constraints. So while quality of service classes primarily deal with pod behavior under resource pressure on a node, pod priority classes are more concerned with the order in which pods are evicted to make room for new, higher-priority pods.
A
There
are
different
factors
for
the
eviction
order,
so,
first,
if
the
resource
requests
are
exceeded,
then
there
is
consideration
about
the
Pod
priority
levels
that
is
calculated
by
the
system
and
then
relative
resource
usage
compared
to
request,
so
those
classes
are
set
manually
and
arbitrary
by
the
user.
So
it's
again,
it's
set
declaratively
within
the
Pod
configuration.
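A sketch of what that declaration looks like (the class name and value are made up for illustration):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-service
value: 100000          # higher value = preempted later, scheduled first
globalDefault: false
description: "For revenue-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: checkout
spec:
  priorityClassName: critical-service   # opt this pod into the class
  containers:
  - name: app
    image: nginx
```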
So now that you know the proper resource requests and limits to set, you can start playing with autoscaling. In this section we're going to be talking about the Cluster Autoscaler, VPA, HPA and KEDA. Let's start with the Cluster Autoscaler: it has the ability to resize the Kubernetes cluster based on workload requirements.
If the Cluster Autoscaler detects any pods in the Pending state, it will select a node group to scale up, based on the pod constraints, using a priority score algorithm. Conversely, if the cluster is utilizing fewer resources and can be scaled down, then the Cluster Autoscaler will select the nodes with the least amount of resource usage.
In our setup this includes the CNI, to make the cluster work in the first place, and then any additional software such as NGINX or Prometheus, for example. And then, finally, the Cluster Autoscaler is responsible for scaling the whole workload cluster depending on resource usage. The corresponding Cluster Autoscaler configuration contains a couple of interesting arguments: first off, the cloud provider, which in our case is specified as clusterapi, as well as the node group auto-discovery, which specifies the cluster name, here capi-dev.
That means you have to run that configuration for every target cluster. So capi-dev is one destination workload cluster; if you want to deploy a capi-prod cluster, you will have to deploy another Kubernetes Deployment from that particular template, specifying a cluster name equal to capi-prod. You only need to specify those two lines of settings. Finally, you annotate the Cluster API MachineDeployment object corresponding to the target cluster with the maximum size, as well as the minimum size, you want to set for the cluster, and you are done.
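A sketch of the two pieces just described, assuming the upstream Cluster Autoscaler clusterapi provider conventions (the flag format and annotation keys come from that provider's documentation, not from the talk's slides):

```yaml
# Arguments on the cluster-autoscaler Deployment (one per target cluster):
#   --cloud-provider=clusterapi
#   --node-group-auto-discovery=clusterapi:clusterName=capi-dev
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-dev-md-0
  annotations:
    # min/max node counts the autoscaler may scale this group to
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  clusterName: capi-dev
  # ...rest of the MachineDeployment spec unchanged...
```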
Another scaling tool I want to mention is the Vertical Pod Autoscaler, or VPA. It can adjust pod CPU and memory requests and limits based on real-world usage. It gets its data from the Metrics Server, which means that you have to install the Metrics Server as part of your Kubernetes installation for VPA to work.
VPA is composed of multiple parts. First, we have the Recommender, whose responsibility is to monitor the resources of the pods and compute recommendations. Then the Updater is in charge of checking pod resource allocation, and if an update is required, it may evict the pod for rescheduling with the new settings.
Finally, we have the Admission Controller, whose role is to update the pod resource requests and limits before the pod rescheduling happens. So let's take a quick look at an example of a Vertical Pod Autoscaler configuration. The interesting section is the target reference: in our case, the kind is Deployment, which means that VPA is in charge of scaling all the pods within that Deployment, and the update mode within the update policy, here at the bottom, is Auto.
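The manifest on the slide isn't reproduced in the transcript, but a configuration matching this description would look roughly like this (the target name is a placeholder):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:                  # the workload VPA watches and scales
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: Auto          # evict and recreate pods with new requests
```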
Next up is the Horizontal Pod Autoscaler, or HPA. It is natively part of Kubernetes as the HorizontalPodAutoscaler object, overseen by a controller that is part of the kube-controller-manager. It supports both resource and custom metrics, which means that it can look at resource utilization such as CPU and memory, as well as use Prometheus or other, more specific metrics made available at the custom metrics API endpoint.
Here you can see an example of an HPA manifest which, as you can see, is quite similar to the VPA manifest configuration, and we can see the scale target reference field. In this case, we are managing a Deployment whose name is php-apache. The minimum number of replicas is set to 1 and the maximum to 10. You can also notice the target CPU utilization percentage, which is 50 in this case.
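That slide isn't captured in the transcript either; based on the description, the manifest would be close to this classic autoscaling/v1 form:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:                      # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # aim for 50% average CPU usage
```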
So, in this configuration, if the average CPU utilization goes above the 50 percent threshold across all pods, then the HPA will calculate the number of additional replicas needed to bring the CPU utilization back down to around 50 percent, and it will then adjust the number of pod replicas accordingly. But HPA won't instantly scale out as soon as the 50 percent threshold is crossed; it generally waits for a certain period, to ensure that the condition persists before taking action, depending on the configuration and stabilization windows.
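Those stabilization windows are configurable through the behavior field of the autoscaling/v2 API; a hedged sketch with illustrative values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 min of low usage before shrinking
    scaleUp:
      stabilizationWindowSeconds: 0     # react to spikes immediately
```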
Now, a couple of considerations around HPA and VPA. With VPA, the overall cluster resources are not taken into consideration, which may be an issue if the total amount of requests or limits exceeds the available resources in the cluster. Also, only CPU and memory are taken into account, although there are some projects to include custom metrics, which is already available in Google Cloud. With HPA, there's no scale to zero; for this we're going to see another example with KEDA in a minute. It uses aggregated metrics only, and a delay may be introduced when scaling. Also, VPA and HPA can be used together, but only if they monitor different metrics.
There is also the multidimensional autoscaling project available in GKE that makes use of both VPA and HPA capabilities. And I've kept the best for the end: KEDA, Kubernetes Event-Driven Autoscaling. It's an open source project that extends Kubernetes to provide event-driven autoscaling capabilities.
Let's take a look at an example with a Kafka scaler. Here we have our ScaledObject, which makes reference to a Deployment; the name is scale-consumer. There are a couple of parameters: the cooldown period, the max replica count, as usual the minimum replica count, the polling interval, and the trigger, which in our case is Kafka. Then we also specify a couple more pieces of information, such as the Kafka server address, the topic we are monitoring, as well as the consumer group. And finally, we have the lag threshold, which defines how many messages in a Kafka partition are yet to be consumed by a consumer or the consumer group. If the lag exceeds that threshold, it usually indicates that the consumer is not keeping up with the producers' rate of message creation, which could lead to various types of issues, like increased latency, resource exhaustion, or even data loss if the messages have a time to live.
So in this case, when the lag exceeds the threshold of 10, KEDA will increase the number of consumers, which means that the number of pods is going to be increased.
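A ScaledObject matching that description could look as follows; the field names follow KEDA's documented Kafka scaler, while the addresses, topic and names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: scale-consumer
spec:
  scaleTargetRef:
    name: scale-consumer            # the consumer Deployment
  cooldownPeriod: 30                # seconds of inactivity before scaling to zero
  pollingInterval: 10               # how often KEDA checks the lag
  minReplicaCount: 0                # scale to zero when there is no lag
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.svc:9092   # Kafka broker address
      topic: orders                      # topic being monitored
      consumerGroup: order-processors
      lagThreshold: "10"                 # scale out when per-partition lag exceeds 10
```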
So, a couple of limitations and best practices with KEDA. First, don't use multiple triggers for the same scale target, and don't use HPA together with KEDA on the same target resource. And, as with HPA, a delay may be introduced when scaling; you can use the advanced behavior setting to mitigate that.

Another way to save cost on your Kubernetes bill is by enabling multi-tenancy and limiting the number of running Kubernetes clusters.
One of the solutions is to use namespaces as an isolation mechanism, although a namespace is considered a soft boundary, meaning that you are still going to share some of the cluster resources, such as custom resource definitions, which are not isolated on a per-namespace basis.
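When sharing a cluster this way, you would typically cap what each tenant namespace can consume with a ResourceQuota, along these lines (a generic sketch, not from the talk):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"        # total CPU requests the namespace may claim
    requests.memory: 20Gi
    limits.cpu: "20"          # total CPU limits across all pods
    limits.memory: 40Gi
    pods: "50"                # cap on the number of pods
```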
Another solution is to use vcluster by Loft. It allows you to oversubscribe your host cluster by creating virtual clusters within your physical cluster, like nested clusters. It's like giving each developer or team their own sandbox to play in, without the hassle of managing separate hardware or cloud resources. It is perfect for dev/test scenarios, and super handy for CI/CD pipelines too. It also provides very handy capabilities, such as pausing a virtual cluster in case you don't need the resources anymore, or quotas and limits on a per-virtual-cluster basis.
Once you have implemented all those infrastructure changes, you can use additional tools to monitor and optimize your bill. Kubecost is a piece of software that you install via Helm or a Kubernetes manifest within your Kubernetes cluster. It has Prometheus as a prerequisite and provides cloud billing integration and a complete view of your expenses across your cloud providers.
The tool not only displays cost data, but also correlates it with resource usage for precise cost management. It features a cluster controller that automates tasks like cluster right-sizing and turndown, allowing for optimized resource allocation and the ability to scale down when needed. It makes use of CRDs to extend Kubernetes' native functionality, enabling fine-grained control over cost allocation and reporting parameters.
kube-green takes another approach, with the goal of minimizing CO2 emissions. It is provided as a Kubernetes operator and acts by suspending idle pods. It behaves as a watchdog, intercepting lifecycle events through a webhook. Users define a SleepInfo manifest where they can specify working hours for pods; kube-green will then suspend the pods outside of those working hours.
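A SleepInfo sketch following kube-green's documented schema (the schedule values and namespace are examples):

```yaml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: dev-team
spec:
  weekdays: "1-5"          # Monday to Friday
  sleepAt: "20:00"         # suspend pods in the evening
  wakeUpAt: "08:00"        # resume them in the morning
  timeZone: "Europe/Paris"
```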
OpenCost is a CNCF Sandbox project that collects data from Kubernetes clusters and cloud providers, such as pod resource utilization, associated cloud cost, and pod runtime duration; it then uses this data to calculate expenses for Kubernetes workloads. It is worth noting that the cost allocation engine in OpenCost is powered by Kubecost. On top of these in-cluster software tools, you can find cloud-based SaaS cost management platforms, for example Replex, CAST AI or CloudZero. They all provide real-time monitoring and analytics.
So, to quickly summarize: if you are on premises, carefully choose your hardware and bear in mind virtualization costs. Also, don't neglect the human aspects and time costs, and start small and build incrementally, depending on your requirements.
The next one is very important: before using cluster and pod scaling, understand your application profile. You have to set resource requests and limits, adapt node pools to workload types, and finally use KEDA for scale-to-zero capability.
Once you have done all that tweaking within your environment, you can start optimizing your bill with additional software. But there's another tool I wanted to mention today: Palette Virtual Cluster by Spectro Cloud, based on the vcluster technology. It allows you to group clusters provisioned by Palette, our Cluster API-based engine, into cluster groups that you can then further carve up into virtual clusters.
The permissions to do so are distributed to developers via RBAC. Once they have their sandbox ready, they can start modeling their application using Palette Dev Engine, which gives them a flexible way to deploy their code using containers, Helm charts, Kubernetes manifests and catalog-based components such as message queuing systems or databases.