From YouTube: KCD Sri Lanka 2022 | Main Event
Description
Kubernetes Community Days Sri Lanka 2022 | First KCD in Sri Lanka ever | Supported by CNCF | Fully Virtual and Free | 3rd of September - Main Event
A: Thank you so much for joining us. We're so excited to be with you. Initially we were planning to have everything in person, to get to know each other and to share knowledge with you all. Unfortunately, we had to make this a virtual event due to the prevailing constraints in Sri Lanka, since this is our first ever Kubernetes community event in Sri Lanka.

A: You may be wondering what kind of event this is and how it will encourage you to upgrade your skills and get to know about cloud native products. We hope you first get an idea from our FAQ series. To tell you about it, here's Camilla.
B: Thank you. Kubernetes Community Days are community-organized events that gather adopters and technologists from open source and cloud native communities to learn, collaborate, share knowledge and network for further advancement in Kubernetes — actually, not only Kubernetes: we will share knowledge on other related cloud native, open source projects as well. So we would like to invite the person behind this extraordinary event in Sri Lanka, who will join virtually from Melbourne. Here's Kanchana, the main organizer, Vice President and GM at WSO2.
C: He took time out of his busy schedule to have a chat with me, and to tell me that he had traveled to Sri Lanka and fell in love with the people and the country, and that one day he would like us, the Sri Lankan Kubernetes community, to host a Kubernetes Community Day. Unfortunately, with COVID and other challenges, we couldn't get this done before his death.
D: Thank you, Kanchana. Kubernetes is a complex platform and requires extensive configuration management. To keep Kubernetes workloads safe, especially in production environments, you need to address key architectural vulnerabilities and platform dependencies by implementing security best practices. Here we have Nilesh, an accomplished software architect at WSO2 with over 10 years of experience.
E: On the first slide, I'm introducing you to the attack surface of a Kubernetes cluster. At the bottom you can see infrastructure and cloud, which is basically where you have your data centers, your firewalls, your network and servers. On the next layer we have our cluster, where we have RBAC, authentication and authorization; then we have admission control, and we have network policies.
E: Attacker has access to your network. So now the attacker has somehow found a way into your network — how can you prevent that? Basically, if you're using cloud, you can use SSH key-based auth, and you can make your Kubernetes API private, to avoid attackers or intruders discovering your API and trying to attack it. Then you can have a firewall — you can secure your network through a firewall — and you can have proper RBAC policies implemented in your cloud and also in your cluster.
E: So now the attacker has somehow gotten through your network, and has access to the infrastructure of your Kubernetes control plane. Before jumping into what we can do, let's go through a few slides on what Kubernetes RBAC is, and who can do what. On the first slide, on the left, we have who can access: in Kubernetes we have two types, users and service accounts.
E: In Kubernetes, as I mentioned, users are handled externally. Then we have service accounts. Service accounts are for processes, not for human use — when there's a pod or some other resource in Kubernetes trying to talk to the Kubernetes API, be it a worker node, a pod, a CRD, whatever, it needs a service account to access the API. So these are the users of the Kubernetes API, and then come permissions.
E: As you can see on the right-hand side, we have two kinds of permissions: the one in blue is called a Role, and the one in orange is called a ClusterRole. The Role is namespace-bound — when you create a Role, it only affects that namespace — whereas when you create a ClusterRole, it's global in Kubernetes: it applies across all the namespaces in the cluster. Then, to bind a role to a user or a service account,
E: we have something called a role binding. There are two kinds of role bindings: one is the RoleBinding itself, and the other is the ClusterRoleBinding. A RoleBinding is also namespace-bound: when you have a user, you can bind a role to that user in a given namespace, and that role can be either a Role or a ClusterRole. But a ClusterRoleBinding is more powerful —
E: it carries a lot of privilege, because it isn't scoped to a namespace; it applies to the entire cluster. So when you create a ClusterRoleBinding for a certain person with a certain permission, that person can use that permission across all the namespaces in the Kubernetes cluster, which is a bit dangerous. We'll look at an example on the next slide.
E: Here we have John, who can read secrets in the foo namespace. There's a role called read-secret in the foo namespace, and you can see on the slide that there's a read-secret RoleBinding to John, which gives John access to read secrets in the foo namespace. That's all right. Then there's Jane: Jane needs access to read and write secrets in the foo namespace. However, read-write-secret is a ClusterRole we have created
E: for some reason, and in this scenario we can still create a RoleBinding to the ClusterRole — read-write-secret — for Jane, which gives her the correct access in the foo namespace. But now look at the admin user: we have given the admin user a read-write-secret ClusterRoleBinding, which means, as you can see from the red dotted lines, the admin user can read all the secrets across all the namespaces available in the cluster.
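As a rough sketch of the difference between the two bindings on the slide (resource and user names are illustrative, not taken from the talk):

```yaml
# Role + RoleBinding: John may read secrets only in the "foo" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-secret
  namespace: foo
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secret-john
  namespace: foo
subjects:
- kind: User
  name: john
roleRef:
  kind: Role
  name: read-secret
  apiGroup: rbac.authorization.k8s.io
---
# ClusterRoleBinding: the admin user gets the same verbs in EVERY namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-write-secret-admin
subjects:
- kind: User
  name: admin
roleRef:
  kind: ClusterRole
  name: read-write-secret
  apiGroup: rbac.authorization.k8s.io
```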
E
So
this
can
be
pretty
dangerous
if
you
don't
properly
give
access.
So
it's
recommended
that
you
use
role
bindings
and
roles
instead
of
cluster
role
bindings,
so
hardening
your
kubernetes
cluster.
We
already
spoke
about
our
back
policies
and
how
to
set
it
up.
The
next
thing
you
got
to
do
is
enable
audit
logging.
So
when
a
disaster
or
something
has
happened,
someone
has
access
to
your
cluster.
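A minimal audit-policy sketch, assuming you run your own kube-apiserver (the file is passed via `--audit-policy-file`; on managed clusters this is handled by the provider):

```yaml
# audit-policy.yaml: full request/response bodies for secrets, metadata for the rest
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata
```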
E: On managed clusters, that's managed for you. But if you are managing your own clusters — if you are managing your own control planes — run the CIS benchmark and apply the recommendations it gives. The next one is applicable for clusters in the cloud, managed clusters, as well: use a CIS-hardened image. By default, when you get a GKE or AKS or EKS cluster, they provide you a simple node — a node with a Linux runtime and a containerd runtime for containers — but you can use CIS-hardened node images, which have more secure profiles.
E
The
next
thing
is
encrypt
xcd
doesn't
matter
whatever
the
security
tools
you
run,
you
might
be
running,
runtime
security
sandbox,
you
might
be
running
everything,
but
if
your
hcd
is
not
encrypted
in
your
control
plane,
any
attacker
who
has
access
to
your
control
plane
should
be
able
to
easily
look
at
the
key
value
store
that
fcd
provides
and
then
read
from
it
and
get
your
secrets
same
goes
for
your
secrets,
so
the
common
recommendation
is,
I
mean
12
factor,
apps,
taught
us
to
inject
variables
as
environment
variables.
That's
not
the
case
anymore.
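As a sketch, encryption at rest for Secrets in etcd is configured on the kube-apiserver with an encryption configuration file (passed via `--encryption-provider-config`):

```yaml
# encryption-config.yaml: encrypt Secrets at rest with AES-CBC before they hit etcd
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>  # e.g. head -c 32 /dev/urandom | base64
  - identity: {}  # fallback so existing plaintext entries can still be read
```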
E: Let's go to the next section: the attacker has access to a service account in your cluster. All right, so now the attacker has come in — they have control-plane access, and somehow they can talk to your Kubernetes API with some permissions. So how do we stop that?
E: The concept of network policies: as you can see in the example here, we have a web server in green, a Python backend and a database, and there's no need for the web server to talk to the database — only the Python backend needs to talk to the database. And in a separate namespace we have something called a super-important API. Network policies —
E: what they allow us to do is to say specifically which pod can talk to which service. So we can specifically say the web server cannot talk to the database: the web server can only talk to the Python backend, and the Python backend can talk to the database. So if somebody has exploited the web server and has access to it, they won't be able to write directly to the database — they won't be able to access the database at all. And then, if somebody comes and gets into the Python backend —
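A sketch of that rule (pod labels and port are illustrative): a NetworkPolicy on the database that admits ingress only from the Python backend.

```yaml
# Only pods labelled app=python-backend may reach the database pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: python-backend
    ports:
    - protocol: TCP
      port: 5432
```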
E
Of
course,
then
they
have
access
to
the
database,
but
still
because
of
network
policies.
They
will
still
not
be
able
to
attack
the
super
important
api.
We
have
on
a
different
name
space
right.
So
that's
what
network
policies
allow
us
to
do
and
by
default
you
know
you
have
to
use
a
cni,
a
container
networking
interface
plugin
on
kubernetes
that
supports
these
network
policies.
Network
policies
can
be
at
many
layers,
so
common
leads
at
tcp
four
but
layer,
seven
network
policies,
network
policy,
supporting
cni's
are
there
such
as
psyllium
etc.
E
E: Then, as an added advantage, if you have an API gateway, you can even make the inter-service communication go through the API gateway. If you're using a service mesh, use mTLS; if you're using a service mesh powered by eBPF, network encryption etc. happens by default. So that's great — that means no one who has access to the infrastructure can snoop on your network and figure out what's going on.
E: Finally, there are admission controllers on Kubernetes — there's Open Policy Agent, Kyverno, etc. — which allow you to write policies for your Kubernetes cluster; I'll explain that a bit more on a later slide. So now: the attacker has access to your code base.
E: With your CI pipeline you can run SonarQube scans, etc., and figure out whether you are committing any sensitive information to Git — which is accessible by anyone if it's public. Then you can do code vulnerability scanning, as I mentioned, and you have to keep scanning. It doesn't matter that today you scan it and it's all good —
E: everything is green — there could be an exploit or a vulnerability discovered tomorrow, so you've got to scan on a daily routine, or on whatever cadence you prefer. Then comes image vulnerability scanning. Our images might contain vulnerabilities; this too you've got to scan at an interval. You can't just scan once and, if it's okay, forget about it — you've got to scan continuously. Exploiting a vulnerable image could lead to —
E: you know, privilege escalation; someone could get remote shell access; your information could leak; someone could launch a DDoS attack within the cluster. I personally use Clair, but people use Trivy and other tools for image vulnerability scanning. And then there's another one called configuration scanning — that's for when you are committing your YAMLs to your GitHub repository, etc.
E: So let's talk about the Kubernetes admission controller, which I promised to cover from the previous slide. Let's see what it does. In this scenario, there's a create-pod request by a user. When that request goes to the Kubernetes API via kubectl, what happens is: firstly, it figures out who you are; then it goes to authorization — what can you do, can you actually create a pod; then, finally, there's admission control, and in admission control there are many policies.
E: We can even write our own policies. In this scenario, what it's going to look at is whether the pod limit has been reached in the given namespace — whether you are able to create a pod or not. Admission control has two types of webhook we can plug in: a validating webhook and a mutating webhook. A validating webhook is a read-only type: it scans a given request and either allows it or denies it.
E: This is perfect for third-party policy controllers like Open Policy Agent or something else. Here you can add additional policies — like: don't pull images from the public Docker registry, pull images only from your private Docker registry — and it will automatically deny at the admission-control level when you supply that policy via a third-party policy controller. Or you can write your own validating webhook as well.
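For example, with Kyverno — one such policy controller — a registry restriction like the one just described could be sketched roughly as (registry name is illustrative):

```yaml
# Kyverno ClusterPolicy: reject pods whose images are not from our private registry
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce   # deny the request instead of only auditing it
  rules:
  - name: only-private-registry
    match:
      any:
      - resources:
          kinds: ["Pod"]
    validate:
      message: "Images must come from registry.example.com"
      pattern:
        spec:
          containers:
          - image: "registry.example.com/*"
```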
E: Then there's the mutating webhook, which is a bit different: it changes the payload dynamically. This is mostly used with CRDs and the controllers and operators that work with those CRDs. Here you give a payload and, according to some logic, it changes the payload dynamically and applies it to Kubernetes in a different form. So that's what the Kubernetes admission controller does, and this allows you to write your policies and secure your cluster.
E: The next step: now the attacker has access to your container. Somehow they have exploited your container; now they have shell access. Let's see what we can do. When it comes to container hardening — when you're building the container itself — there are some best practices you've got to follow.
E: First thing: remove bash, so that they won't be able to access your shell remotely. Make the file system read-only, so that they won't be downloading anything or writing anything into your container file system. And make sure the container is running as a non-root user.
E: When these are done, it's hard for an attacker to exploit a container — so, basically, make your container immutable. Then there are other things we can do. There are images we don't have access to change — there are scenarios where we don't build the image, somebody else built it and we only run it. In those scenarios we can use Kubernetes, without changing the image, to enforce immutability.
E: We can use the startup probe in Kubernetes: where a container is running, you can run some script before the container actually starts serving, and using that we can remove bash if we want to. And we can set runAsGroup, runAsUser and runAsNonRoot in the security context, and disable privilege escalation — we can do a lot of things. By using those, we should be able to make the containers immutable at the Kubernetes level as well. Then comes runtime security.
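Those Kubernetes-level hardening settings can be sketched in a pod spec like this (image and IDs are illustrative):

```yaml
# Hardened pod: non-root, read-only root filesystem, no privilege escalation
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    securityContext:
      runAsNonRoot: true
      runAsUser: 10001
      runAsGroup: 10001
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```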
E: Doing all of that is sometimes not enough, and you need more. Let's look at how containers work and what this runtime security means. As you can see, in a VM, when a container is running, it's running on top of LXC, and then we have the Linux kernel below. So basically a container is a group of namespaces and cgroups.
E: When containers are doing things, they make syscalls to the Linux kernel. That means, if there's a vulnerability in the Linux kernel, the container would be able to exploit it — if someone with the proper knowledge and tools is there.
E: So the first thing to do, as I mentioned earlier, is disable privilege escalation and drop all the capabilities of a container, then add back only the things that are needed, using Kubernetes constructs. And there are things like AppArmor and seccomp profiles, which restrict the syscalls that you make to the Linux kernel, so that you don't make any weird ones and try to exploit the kernel.
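As a sketch, applying the container runtime's default seccomp profile via the pod security context looks like:

```yaml
# Restrict syscalls with the runtime's default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # or type: Localhost with a custom localhostProfile file
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
```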
E: Then there's container sandboxing — there's Firecracker, there's gVisor; there are a few out there — which makes this issue go away, but on top of it adds some performance overhead. Still, it's great if you're running a multi-tenant system and you don't trust the images. Then there are tools like Sysdig Falco, etc., which monitor the container runtime for abnormalities and give you alerts — "this is happening, stop this", etc. So that's container runtime security for you. Then you've got to conceal your information.
E: This is something I mentioned earlier as well: I asked you not to inject environment variables, but rather to provide them as file mounts, and when you are doing that, it is recommended to inject via a secret manager at runtime. Don't save them as Secrets in Kubernetes itself — Base64 is not encryption. So use HashiCorp Vault, or Azure Key Vault, or AWS,
E: or Google — there are so many secret managers out there. Use one of them and inject your sensitive information as files into the container at runtime; you can very easily do that with those technologies. And, as I already said, make your container root file system read-only. And — this is developer discipline — do not log sensitive information.
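A minimal sketch of the file-mount pattern (shown here with a plain Kubernetes Secret for brevity; in the setup described above the value would be injected from an external secret manager instead):

```yaml
# Mount a secret as a read-only file instead of an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: secret-file-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    volumeMounts:
    - name: db-credentials
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: db-credentials
    secret:
      secretName: db-credentials   # the app reads /etc/secrets/<key> at startup
```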
E: Kubernetes security is still new; vulnerabilities get discovered every day, and they get patched frequently, so update your clusters as soon as you can. If you're running your own, or even in the cloud, just keep upgrading your clusters. When it comes to managed Kubernetes clusters in the cloud, security is mostly — I mean, maybe 70% of it, the control-plane security — managed for you; you don't have to do much there. And many organizations today need some level of multi-tenancy.
D: Kubernetes deployments have become the de facto standard for the orchestration of containerized applications, and we all know that distributed applications, in any architectural environment including the cloud, have always required rules to control how their requests get from place to place. That means we need a tool that inserts security, observability and reliability features into applications at the platform layer, instead of at the application layer.

D: That tool needs to be sustainable and demand less operational burden — you know, as the engineers, at the end of the day you folks need to handle it, right — and it should be scalable too. That's where the service mesh comes onto the stage. Now we are going to see a service mesh in action with Linkerd, the graduated CNCF service mesh that makes the fundamental tools for software security and reliability freely available to every engineer in the world.
F: Hi there, my name is Flynn. I am a tech evangelist for Buoyant, the makers of Linkerd. If you're not already familiar with Linkerd, it's the only CNCF-graduated service mesh. Linkerd's purpose in life is to arrange it so that every cloud native developer on the planet has access to the tools that they need to build secure, reliable, easily observable cloud native applications — and for those tools to be freely available.
F: My role as a tech evangelist is to make sure that people know that, and also to make sure that people have the knowledge and resources they need to really succeed at the whole cloud native thing. So, to that end, today I will be talking about observability using Linkerd. This is kind of the classic problem in cloud native.
F: It's very, very hard to really see what's going on inside the cluster, even when things are going well; when things start going badly, it's even harder. Service meshes are well positioned to help with that, and for Linkerd there are two things in particular that make it really good at it. One is the linkerd viz tool; the other is the ServiceProfile CRD. linkerd viz is a tool that just gives you easy visual access to a bunch of things within your cluster. Here we can see the topology of our application.
F: Also, the really killer bit of this is that as soon as the service profile is created, Linkerd will watch it and aggregate statistics for you on its own, without you having to do anything special. You'll be able to go through and look back in time to get access to those — it's a really, really wonderful tool for troubleshooting.
F: Also, I said management: so, for example, you can use a service profile to do things like configure retries automatically. We're going to see some of that with the rest of the demo here. And yeah, this is pretty much it for the slides; the rest of this presentation is a live demo, so let's get to it, shall we? Okay, so I have here a running Kubernetes cluster. I'm doing all this on a k3d cluster that's running on my laptop, just because I kind of like having control over everything.
F: The first thing you'll see here is that we've got the books demo and we have the emojivoto demo; we have both of those running in the cluster. We can take a look: this is the books demo — you may have seen it before; you can, you know, go click around and look at books and look at authors, and yeah,
F: it's pretty simple. This is the emojivoto application, where you can go and vote for emoji, and then you can view the leaderboard — and that's all there is to it. I should point out there is a traffic generator in here; I am not just clicking endlessly on emoji all day. So let's go back here.
F: All of this I pretty much set up just using the quick starts available from the Linkerd documentation. I'll have the link at the end of this presentation, where you can go and see exactly how I set everything up — it's pretty standard, pretty easy to get going. A very important thing here is that we have run linkerd check: we can see that Linkerd is actually running cleanly on its own, so we're good to go. All right, now let's suppose it's Friday night and somebody calls up and says something is wrong with the emojivoto application.
F: Well, I guess the first thing we can do is just look over all the namespaces in the cluster using linkerd viz from the command line, and we can see immediately that, yeah, there's some challenging stuff here: the emojivoto application actually is not showing us 100% success.
F: Neither is books, but we'll have to come back to that. So, given that we can see there's something wrong in this namespace, let's drill into that a little bit. Here we're going to look just at deployments in the emojivoto namespace, because that's where we already know there is a problem anyway.
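Those two steps correspond roughly to these linkerd viz invocations (exact flags may vary by Linkerd version):

```shell
# Success rate per namespace, then per deployment in the suspect namespace
linkerd viz stat namespaces
linkerd viz stat deployments -n emojivoto
```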
F
So
if
we
do
that
again,
we
can
immediately
see
that
the
web
deployment
and
the
voting
deployment
look
like
there's
some
kind
of
unhappy
things
going
on.
So
those
also
seem
like
pretty
natural
places
to
look
so
we're
going
to
go
ahead
and
take
a
look
at
the
web
deployment.
We'll
use
linker
dvi's
top
for
this.
That's
going
to
give
us
a
rundown
in
real
time
of
the
most
common
requests
that
are
going
to
or
from
the
web
deployment.
F
F
F
F
F: At this point we probably could go off and hand this to the developers and say: hey, it looks like there's a problem voting for donuts. But we can probably do better than that. So another thing we can do is, instead of running linkerd viz top, run linkerd viz tap. Tap shows us, in real time, request by request — it just gives us a running list of everything going on.
F: It's a really nice way to get a quick look at, you know, the actual live traffic. And here I see a bunch of things that are working right: I see a POST, it gets a 200, it's got a gRPC status of OK — so far so good. I don't actually see any donut requests so far... oh wait, here's one, all the way down at the bottom. Yeah, so you'll notice that this says gRPC status: unknown. Very important to note here: status unknown does not mean that Linkerd doesn't know what the status is.
F: There's one more thing: if we tack -o json onto that same command — so we're going to look at the donuts, but in JSON — then instead of giving us that nice three-line summary, it'll break everything out into a huge JSON block, and we'll get information on the requests and the responses. There we go, there's one, so we can see: this is a request.
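The two tap invocations sketched here would look something like (resource paths are illustrative):

```shell
# Watch live requests to the web deployment, then the same stream in full JSON detail
linkerd viz tap deploy/web -n emojivoto
linkerd viz tap deploy/web -n emojivoto -o json
```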
F: On the other hand, this took a little while, and the reason it took a little while was that we needed to go through and watch the traffic and wait for somebody to vote for a donut, so we could see the problem. Then we had to wait for them to do it again, so we could see if the problem persisted. You know, there should be a better way — and, as I was mentioning earlier with service profiles, there is a better way. The emojivoto application is a gRPC application; gRPC means protobuf.
F
Protobuf
means
that,
rather
than
writing
service
profiles
by
hand,
we
can
just
ask
linkery
profile
to
go
through.
Read
the
protobuf
definition
and
write
us
a
service
profile
for
it.
So
let's
do
that
for
the
emoji
proto
there
you
go,
there's
not
much
to
it.
You
can
it's
both
posts,
which
is
kind
of
interesting.
You
can
list
all
the
emojis.
You
can
find
a
given
emoji
by
short
code,
that's
kind
of
it.
F
H
F
F: Same command — just generate from the proto and then apply it; that works out pretty well. And then we can do the same thing for the other gRPC service that's part of the emojivoto application, the voting proto. I'm not going to bother showing that; it's pretty much just more of the same, so we'll apply that one. Now, we really want to see some things about the web too, and that's problematic, because the web app here is not gRPC — the web app is just plain old REST. So we could write the service profile by hand;
F: if we had a Swagger definition, we could have linkerd profile just read the Swagger definition and write a profile for us. In this case we don't have either of those things. So instead what we're going to do is this other trick, where we can have Linkerd just watch the traffic going by for a little while — in this case we're going to say 10 seconds — and generate the profile based on what it actually sees in the traffic, which is kind of cool.
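The three ways of generating a ServiceProfile described here can be sketched like this (file and service names are illustrative):

```shell
# From a protobuf definition
linkerd profile --proto Emoji.proto emoji-svc -n emojivoto | kubectl apply -f -
# From an OpenAPI/Swagger definition
linkerd profile --open-api webapp.swagger webapp -n booksapp | kubectl apply -f -
# By watching live traffic for 10 seconds
linkerd viz profile web-svc -n emojivoto --tap deploy/web --tap-duration 10s | kubectl apply -f -
```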
F: So here we are at the dashboard. We have the books app namespace and the emojivoto namespace, and, as we saw from the command line, things are not working flawlessly here. Let's go into this namespace, and you can see with the topology graph we've got this vote-bot that's generating traffic and sending it over to the web service; the web service, in turn, is talking to the emoji service and the voting service. All this lines up with what we saw from the command line, which is kind of nice.
F: Let's go take a look at the web deployment itself — you'll also see a different graph here, of which deployments are talking to which. But the really neat thing we can do now is click on this route metrics tab, where we can immediately see: oh hey, here's the route that's not doing so well — GET /api/vote.
F: It moved up because this is a top view. Let's click on that; that fills in a tap page for us — you can see it filled it in with donut. Click on start, and as requests for donuts come in, this will populate. There's one, and we can click on this tab, and there you go: there's the JSON view from before. So we kind of get to have it both ways.
F: Obviously this is much faster than going through and running all the stuff from the command line, but it's important to note it's working on exactly the same information, so everything you can do here you can do from the command line. All right — so at this point we would hand this over to our developers, tell them there's a problem with the donuts, and we can move on. And of course that would be the time that something comes up with the books app, because, I don't know, that's the way life goes, right?
F: The nice thing about the books app is that it already has service profiles, so we don't have to go through and build them manually. So we can go straight to the route metrics from the command line and see, okay, what's going on with the webapp service here in the books app — and we can immediately see that, oh right, there are two things in here that seem to be failing about half the time. That looks, you know, problematic.
F: Yeah, there's one call — it's just a HEAD call, but it's failing about half the time. Kind of interesting. Now, we could of course do all of this from the GUI as well; let's go ahead and do that: back up to namespaces, then duck into the books app namespace. There's our topology graph again — we saw this at the beginning of the presentation: traffic generator talking to the webapp, talking to the books app, talking to authors, and books and authors talking to each other.
F: So here, if we go through and take a look at one of these — let's look at the webapp, shall we — once again we get the neat little graph here, and once again we can go down and look at route metrics and kind of immediately see: right, these are a problem. If we go in and look at — let's look at the authors deployment, actually. If we look at the authors deployment and we look at its route metrics, again we can immediately see: okay, this HEAD — that's got some trouble.
F: If we look over here: for routes that are idempotent and don't have bodies, you can edit the ServiceProfile and add isRetryable to the retriable route. That's the only thing we have to do to enable retries down in the mesh — we don't have to change any application code. So let's go ahead and try that out. We'll do that using kubectl edit — the really simple way. That's the authors ServiceProfile; all the way at the bottom, you can see —
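In the ServiceProfile, the change amounts to a single field on the failing route (route name and path regex are illustrative):

```yaml
# Excerpt of a Linkerd ServiceProfile: mark the HEAD route as retryable
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: authors.booksapp.svc.cluster.local
  namespace: booksapp
spec:
  routes:
  - name: HEAD /authors/{id}.json
    condition:
      method: HEAD
      pathRegex: /authors/[^/]*\.json
    isRetryable: true   # Linkerd may now retry this idempotent, body-less call
```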
F: this is the HEAD that we're talking about retrying, and we are literally just going to add isRetryable: true. We'll save that, quit — that's updated. And now let's go and watch and see what happens. If we tack -o wide onto this linkerd routes command, it will tell us the effective success rate as well as the actual success rate, and if this worked, we should see these two start to diverge: we should see the effective success rate going up, even though the actual success rate isn't doing much. And yeah —
F: what we don't know yet is whether this is the only problem with our books application right now, so let's go take a look and see. What should we do? Let's look from the webapp's point of view, I think — yeah, webapp talking to books, that'll probably do it. And here we can see: right, we've hit 100% on everything at this point. Here we're only seeing the effective success rate, but the effective success rate is the one we care about from the user's point of view.
F
So
at
this
point
we've
been
able
to
use
linker
d
to
figure
out
what
was
failing
and
to
put
in
a
mitigation
so
that
the
end
user
is
no
longer
affected
by
this
problem.
We
still
have
work
to
do
with
the
developers.
Obviously
we're
going
to
have
to
go
through
figure
out
why
exactly
the
thing
was
failing,
which
we
don't
know
right
now
and
we're
going
to
have
to
you
know,
get
a
fix
put
in
place
to
really
solve
the
problem
for
real.
F
F: But if you remember, I started this by saying it was a Friday night, and putting in this quick change to the service mesh means that we don't have to go back and bug the developers on Friday night — everything is working from the user's point of view. We can come back and tackle this on Monday morning. So that's about it for this demo. You can find more information about Linkerd at linkerd.io, or you're always welcome to join our Slack: go to slack.linkerd.io
F: for that — I hope you do. The source code for Linkerd is in the github.com/linkerd organization. You can also find us at @linkerd on Twitter. If you're curious about how this particular demo got set up and run, you can look in the Service Mesh Academy repo — you'll find everything there, all the details. And you can always reach me at flynn at buoyant.io for email, or at flynn on the Linkerd Slack. Hope to hear from you. Thanks.
So
today
we
will
showcase
how
vs
code
and
azure
involvement
of
kubernetes
and
cloud
development
to
showcase
this
session.
Beta
is
the
best
person
who
can
deal
with
azure
and
kubernetes.
Vito
is
a
steam
cloud
solution.
Architect
at
microsoft,
he's
been
developing
application
for
27
years
and
a
big
fan
of
as
well
github
and
visual
studio
code
and
microsoft
is
our
platinum
sponsor
of
kcdsl.
J
Let's start by setting ourselves up for success with Visual Studio Code. There are a few extensions that are used in this session, and I recommend that you install them as well, as they are really good for ease of development. The first one is Azure Account, to sign in to our Azure subscription from Visual Studio Code. The second one is Azure Container Apps, the third one is Codespaces, and lastly, GitHub Copilot.
J
If you look at the kind of software that we need to run, it's kind of like emulating the server, so we need a powerful machine, so it's expensive. The third thing is security. There are a couple of issues here. One is that your machine can be vulnerable to, say, malware, and your code, which is actually residing on your machine, may be copied or exposed.
J
The other thing is that the security team can't really review code that's on your machine. So that's from a security point of view. From a collaboration point of view there are also challenges. Today it takes a huge amount of time for someone to drop in and make a fix to a project, because of the environment that's required to set it up in the first place.
J
Think of a QA engineer working on that project. We have a virtual desktop solution for contractors, but it's very hard to use. Running multiple Docker containers on company-issued laptops gets tricky, and inconsistent local environments make it hard to standardize on build tools. Codespaces to the rescue.
J
Codespaces does this by running the code and dependencies in a VM on Azure; the development environment, whether that's Visual Studio Code, Visual Studio or in-browser VS Code, connects to it. Let's have a look at Codespaces. This is the sample app that I forked from Azure Samples and that I will use for this session.
J
You can connect to a Codespace through the Visual Studio Code client as well; the code remains in the cloud. The idea is to be able to develop from anywhere, and that can mean from a remote location, from any particular device, or from a laptop that's more of a dumb client rather than something that's fully configured.
J
That is really about providing the premier developer experience in a flexible way. The other idea is developer velocity; you can't achieve this in any other way. With a Codespace, a developer has an environment ready to go that they can pick up right away, switching between tabs to go between projects instead of spending literally hours to spin down one project and then spin up another one. And onboarding and training will go much faster as well.
J
Personally, as a developer, I'm a minimalist. I don't like having a lot of stuff on my laptop: it slows down the computer, it takes up memory. So what happens with Codespaces is that all of these are actually in the cloud, which frees up those resources on my local laptop, and it's super fast. So that's one of the advantages that you get as a developer.
J
The next thing is the storage and compute: they actually scale up on demand. So that's the other perk that you get using Codespaces. Coming back to our Codespace: remember, earlier I mentioned that we will need to modify the code more, and here I'm noticing that there is no dev container definition.
J
So by having this dev container, it will not just benefit us; it will benefit other developers as well, as they will definitely need these kinds of extensions working in their Codespaces too. So here there's no container configuration currently found, so I'll add one. Let's add one that's based on Node.js, let's go for the default one, and we'll need Docker as well. So let's go for Docker-from-Docker, to reuse the host Docker engine, and we'll go for the latest version, and there we go.
J
Let's look here. Cool: within the devcontainer.json definition, we will now see that the Docker extension has been added to the list of extensions. These are the extensions that will be automatically added when we turn on our Codespace the next time, and we have Docker-from-Docker as a feature as well.
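The resulting devcontainer.json comes out roughly like this; the exact image tag, feature id and extension id are assumptions here and will vary with the versions the wizard offers:

```json
{
  "name": "Node.js",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:18",
  "features": {
    "ghcr.io/devcontainers/features/docker-from-docker:1": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-azuretools.vscode-docker"
      ]
    }
  }
}
```

Because this file is committed with the repo, every Codespace created from it comes up with Docker and the Docker extension preinstalled.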
J
Okay, so now this is a new Codespace, but it's actually turned on based on our newer code with that dev container added. So this is that dev container file. Let's see if we have our magic here, which is the Docker extension installed. Yay! So, imagine, the next developer will then save a step by having this extension already there.
J
Remember seeing the notification on Docker-from-Docker earlier, while turning on the Codespace? Let's find out what that's about. We can do that with the creation log, and we can access it from the command palette. Let's go to the creation log. So this is the creation log, and let's head to Docker-from-Docker.
J
Now let's have a look at our API; let's open it in the browser. As you can see, Visual Studio Code will set up the port forwarding, as well as the authentication, if you run it through Visual Studio Code. And there you have it: our API.
J
Okay, now that we have looked at how we can have a development environment in the cloud with Codespaces, let's have a think about how we can then deploy our app, our microservice, our API, to the cloud. And here we know that we have a Dockerfile, which means a containerized application.
J
So what is Azure Container Apps? Azure Container Apps operates on top of a Kubernetes cluster, but the whole cluster is entirely managed, so you don't have to worry about managing it yourself. That makes it very interesting for those who are focused on application delivery: you want to focus on really iterating on your features, not on maintaining clusters.
J
The scaling, the HTTP routing and the service discovery are all still there for you in Azure Container Apps, so it's a very interesting service; we'll deep dive into it very shortly. But let's now take a look at the bottom right, where we have Azure App Service and Azure Functions. Azure App Service is a way to host web apps and web APIs.
J
You can publish code directly to it, so you don't have to manage a container, and it has cool features like deployment slots. But if you compare Azure App Service to Azure Container Apps, Azure Container Apps is actually more optimized for microservices that need to communicate with each other, so Azure Container Apps is more suitable if you are looking at developing microservices.
J
Lastly, we also have Azure Functions. Azure Functions is serverless.
J
The thing about Azure Functions is that you need to interface with it through the Functions SDK, so it's a little bit more opinionated. On the other hand, Azure Container Apps allows any containerized app to execute, while still being triggered by the same serverless model, using events. So there you go, that's the overview of the Azure containers portfolio. Now, for our app and for today's session, we'll deep dive into Azure Container Apps as the balanced option here.
J
So Azure Container Apps is the latest container offering from Microsoft. It wraps around Kubernetes, KEDA, Dapr and Envoy to provide you with a serverless way to run containers on top of a Kubernetes cluster. The serverless part is interesting: you get serverless-style pricing with a Kubernetes cluster underneath it.
J
So serverless containers: how does that even work? In short, KEDA, Kubernetes Event-Driven Autoscaling. With Azure Container Apps you can run containers and scale in response to HTTP traffic, and it also supports a growing list of KEDA-supported scale triggers like Azure Event Hubs, Kafka, RabbitMQ, MongoDB, MySQL and PostgreSQL.
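To make that concrete, here is a sketch of the scale section of a Container Apps definition; the field names follow the ARM template shape, and the replica counts and rule values are illustrative:

```yaml
scale:
  minReplicas: 0          # scale to zero when idle: the serverless part
  maxReplicas: 10
  rules:
    - name: http-rule     # KEDA-style HTTP trigger
      http:
        metadata:
          concurrentRequests: "50"   # scale out when a replica sees more than 50 concurrent requests
```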
J
Azure Container Apps is another example of the kind of advantage that you get on the Azure cloud: because you're not burdened by managing a container orchestrator, you get more time to focus on building your app. With Container Apps you can build microservices, APIs, event-processing workers and background jobs using containers.
J
Container Apps is built on great open source technology: Kubernetes, KEDA, Dapr, Envoy. All of these are open source, so you have the assurance of being able to create modern apps with open standards, based on a Kubernetes foundation and with portability in mind. With Container Apps you can rely on streamlined application lifecycle tasks such as application upgrades and versioning, traffic shifting, service discovery and monitoring, all on open source.
J
Okay, cool. So with the Azure Container Apps extension installed, we can then proceed to deploy our app.
J
Okay, cool. So we are doing this within Visual Studio Code; we are doing all of this within just this one environment. Okay, so now we…
J
And we'll begin pushing.
J
Next, we'll need to create a Container Apps environment, so we'll go here, and from here we can see that I already have one environment. But let's create a new one for demo purposes, so we'll create a new container environment and I'll call it v-demo.
J
So this is useful for situations where you are managing related services, so that the container apps can talk to one another; for example, in situations where you're using Dapr, this is also very useful. Now, if you need container apps to be isolated, then you need to create a separate Container Apps environment.
J
The next interesting aspect of Azure Container Apps is GitHub integration. If you use Azure Container Apps, you should use GitHub Actions to publish revisions to your container app. As commits are pushed to your GitHub repo, a GitHub Action is triggered which updates the container image in the container registry; once the container is updated in the registry, a new revision is created.
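A generated workflow looks roughly like the sketch below. The registry, app and resource group names are placeholders, and the exact steps Azure generates will differ:

```yaml
name: deploy-container-app
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Build the image in Azure Container Registry, tagged with the commit SHA
      - name: Build and push image
        run: az acr build --registry myregistry --image myapp:${{ github.sha }} .
      # Point the container app at the new image, creating a new revision
      - name: Deploy new revision
        run: >
          az containerapp update --name myapp --resource-group my-rg
          --image myregistry.azurecr.io/myapp:${{ github.sha }}
```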
I
Monitoring CPU, memory, database and networking conditions is usually enough to understand these systems and apply the proper solution to the problem. So, to share the knowledge of a high-level overview of how Kubernetes observability helps us head in the right direction, here's Janaka, who is the director of Ascension IIT. He is a seasoned person in the open source ecosystem, going back to the days of building the Sahana project at the Lanka Software Foundation.
L
Thank you, Kavisha. Hi, everybody. Kubernetes is a unique beast which consists of a multi-layered web of resources and services. Achieving observability is a daunting task, even for the best and brightest among us. This presentation will give you a high-level overview and some implementation approaches to achieve observability in Kubernetes.
L
So let's first look into what observability is. Observability and monitoring are often used interchangeably, although there's a subtle difference between them: observability is about the ability to assess the internal state of a system based on the data it provides, while monitoring deals with the collection and analysis of data pulled from the infrastructure. So observability will help you gain deeper insight into the health and status of the different applications and resources across your infrastructure, and this helps you to proactively detect abnormalities, analyze issues and resolve them.
L
So the three pillars of observability are logs, metrics and traces; I will discuss those later. Let's move into monitoring. The foundation of observability is monitoring, which involves pulling and analyzing data from your infrastructure. It started with SNMP, the Simple Network Management Protocol, back in 1988, where we used to poll information about the network, say, every 10 seconds or 30 seconds.
L
Now there are much newer protocols available, such as OpenConfig and gNMI, the gRPC Network Management Interface, that can provide much more real-time information. Telemetry and APM, application performance monitoring, are also types of monitoring, a bit more advanced. Telemetry especially focuses on collecting information from distributed systems, while APM is focused more on the application level.
L
Let's look at the different levels of Kubernetes monitoring. There are two levels of Kubernetes monitoring available. One is the cluster level, which focuses on node information, pod information and cluster-level resource utilization. The pod level focuses more on the container level, especially application as well as container-level information.
L
Okay, let's see why this monitoring is so important. In cloud native or microservices apps, things are very complex and you have a lot of moving parts; when an issue occurs, it's very difficult to pinpoint and identify it, so monitoring is important for reliability as well as troubleshooting. Secondly, knowing your infrastructure will help you to optimize your hardware.
L
Thirdly, the public cloud: if you are using public cloud infrastructure, the cost will play a major role, so having insights into the Kubernetes environment will help you reduce your cloud spending. In some instances you may be using Kubernetes in a multi-tenant way, or you are providing it to your internal customers; in that case, having insights will help you charge back or show back to your internal customers.
L
Finally, observability is actually a cornerstone of your security strategy: you will be able to identify any malicious ingress and egress traffic, or any unwanted pods and services that are running in your environment. All right, so there are some challenges that come with this observability as well. One is the amount of data: you get data from your nodes.
L
You get data from your pods, the flow data, so much data that you have to manage. And secondly, you also have difficulty because it's a distributed system with so many moving parts; getting the full picture is also a bit of a challenge that you will face.
L
Finally, because Kubernetes is declarative in nature, you can define how you want the pods to run and be created, and that might actually give you false positives, especially when it comes to performance. All right, so on to best practices. Firstly, granular resources like CPU, memory and load are very important, but they can be very complex and convoluted, so to easily identify microservice issues, the API metrics are the main part.
L
So what you can use are things like the request rate, the call errors and the latency; that will help you quickly identify the degrading components in your microservice. Another aspect is high disk usage; that's a very common issue that you will come across. There's no straight-away magical solution; just make sure that you get notified when it hits about 70 to 80 percent of your storage and take some action.
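That 70-to-80-percent rule is easy to encode in whatever check you already run; a minimal sketch (the node names and byte counts are made up):

```python
def nodes_over_threshold(usage, threshold=0.7):
    """Given {node: (used_bytes, total_bytes)}, return the nodes whose
    disk usage is at or above `threshold`, in insertion order."""
    return [node for node, (used, total) in usage.items()
            if used / total >= threshold]

usage = {"node-a": (30, 100), "node-b": (75, 100), "node-c": (82, 100)}
print(nodes_over_threshold(usage))                 # ['node-b', 'node-c']
print(nodes_over_threshold(usage, threshold=0.8))  # ['node-c']
```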
L
All right, let's get back to the three pillars of observability. As I mentioned, they are very important when you create and implement your observability for Kubernetes. The first pillar is the logs. Basically, a log is a representation of a discrete event; in most cases it describes what happened with your service. Logs are produced in multiple ways in Kubernetes especially, so you will have cluster-related logs, pod-related logs, application-related logs, network logs and all that.
L
The second pillar is metrics. A metric is a numerical representation of data measured over a period of time; say, for example, how many 200 responses did I get in the last 30 seconds. And the last one is tracing. Tracing is a mechanism that helps you track your whole transaction end to end, so you can follow it end to end and identify and troubleshoot the issue. All right, so let's look at a sample of a purpose-built observability platform for Kubernetes.
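As a concrete example of the metrics pillar, here is how that "200 responses in the last 30 seconds" question, plus the request-rate, error and latency signals mentioned earlier, might look in PromQL, assuming the conventional `http_requests_total` counter with a `code` label (your application's metric names may differ):

```promql
# 200 responses observed over the last 30 seconds
sum(increase(http_requests_total{code="200"}[30s]))

# Request rate and error ratio over the last 5 minutes
sum(rate(http_requests_total[5m]))
sum(rate(http_requests_total{code=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))

# 99th percentile latency from a histogram
histogram_quantile(0.99,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
```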
L
In this one we have three layers. The first layer is telemetry collection: this is where you collect the various flows and different data. The second layer gives you the analytics and visibility, where the different kinds of analysis can be performed; you could even put in some machine-learning anomaly detection. And finally there is the security and troubleshooting layer. Tools for implementing Kubernetes observability include, first, at the bare minimum…
L
Secondly, Prometheus, for the collection and storage of observability data. It provides you with a multi-dimensional data model in a time-series format, and it also provides a query language called PromQL for further analysis. Several modes are available, with different types of graphs as well as dashboards.
L
Prometheus can be integrated with many things, like Grafana. On the Grafana side, it gives you visualization of observability data: Grafana gives you data-rich dashboards using information sources like Prometheus, as I just mentioned, and it also provides built-in dashboards for Kubernetes, actually four of them.
L
One
is
the
cluster
dashboard
node
dashboard
port,
then,
for
that
container
dashboard,
as
well
as
the
deployment
dashboard
further
graphana
has
a
few
more
plugins
that,
for
example,
graphana
loki
also
can
give
you
a
similar
observability
to
your
kubernetes
environment
and
jaeger
is
a
distributed
system
tracking.
A
tracing
system
for
kubernetes
they
actually
came
from
uber's
engineering
team.
L
It gives you an end-to-end tracing solution, so it helps you monitor and troubleshoot transactions in a complex distributed system, identify root causes, optimize performance and latency, and monitor distributed transactions. Finally, the Elastic Stack. The Elastic Stack includes Elasticsearch, which is the analytical engine; Logstash and Beats capture and send the logs to Elasticsearch; and Kibana you can use for the dashboards. You can also drop Logstash and Beats and combine it with something like Fluentd.
L
Then you get an EFK stack, which will also give you a stack for implementing observability in Kubernetes. With that: because of the scale of modern infrastructure and the dynamic nature of Kubernetes, observability is a critical component, and the three pillars I have mentioned, that is, logs, metrics and tracing, will not only help you increase observability but also help you gain insights into your infrastructure, regardless of the technology set you use. The tools mentioned here are the de facto standard tools used by the cloud native community, and implementing them will help you gain observability in your Kubernetes environment.
A
Thanks, Janaka, it was a great session. Our next session will be very interesting, because it's about the complexity of the cloud native ecosystem. Mike, who is the senior marketing manager for Portainer, is going to talk about this based on his years of experience in Kubernetes, cloud native technologies, and distributed and many complex computing systems. Prior to joining Portainer, Mike was a leading person in Cisco's Kubernetes initiatives.
G
Good morning, everyone, and welcome to what I'm sure is going to be an action-packed day. I'm Michael Chenetz, the head of technical marketing at Portainer, and I'm going to talk to you a bit today about adopting cloud native tech and why this is awesome, but also needs to happen with eyes wide open.
G
Hopefully I can share some insights picked up over the many years working with Kubernetes and cloud native technologies. My background: I have had a pretty varied career so far, and I think it's one that has helped me immensely, and really helped me deal with this latest technology trend.
G
Staff are demanding higher pay, there's lots of sick leave to navigate, business travel costs are through the roof, VC funding is super tight, and the number of companies that have unfilled vacancies is just insane. This is causing terrible service across so many areas of business. Have you traveled lately? Have you seen what traveling is like because of this? Consumers, and even businesses with spare money, are spending it extremely wisely, and will only spend it with companies that have a clear point of differentiation, and, for business purchases, only on proposals that have a rapid ROI.
G
So that's right, there's an economic squeeze right now, and it's impacting not only B2C but B2B. Most companies I speak to have frozen spend on all non-essential projects, and lots have put hiring freezes in place. With the competition for money being brutal at present, every single company should be looking at how to serve their customers and asking: can I do it better?
G
Thriving in the new hyper-competitive global economy means embracing change and embracing technology. Every single business owner, and in fact every single business employee, should be thinking: how can I help my company thrive? What disruptive services can we deliver, or which one of our legacy services can we improve?
G
If you are not looking at what your competitors, global as well as domestic, are offering, then you are already in trouble; you just don't know it yet. The problem is, there's only so much existing teams can do, and so all this innovation needs to be done within the constraints. How can you embrace the new now without incurring substantial cost increments in people or tooling? This is the art.
G
It's more than just the technology; it's an architectural philosophy. Cloud native apps take the very best from what we have learnt over the last 10 years about how we should architect, build, support and scale applications. Cloud native lets you start small, scale on demand, innovate on discrete components and, due to that nature, build with extreme efficiency: no need for heavy servers, expensive proprietary software or anything like that. Cloud native defines that applications are inherently portable, free from vendor lock-in, and provide versioned, extensible APIs that make it simpler to manage resources and services.
G
It doesn't matter if you're innovating in the data center, in the cloud or on the factory floor: cloud native apps help you build fast, learn fast and improve fast. Cloud native apps let you get a product into market in the fastest possible way, because you can launch an MVP and then iterate quickly. Thanks to a microservices-based architecture, you can evolve discrete elements of your service based on user feedback and adoption, and you can do this in a really low-cost manner.
G
Heck, the vast majority of modern cloud native applications are based on open source components, so you have no expensive database costs, no expensive application server from a top-tier vendor, and no expensive API gateways. All of this is made with reusable and free components, so there are no upfront commitment costs to try out crazy ideas.
G
Here's where I might get a bit jumbled, but pretty much any system of record should be procured, as the risks are just too high to build your own. Systems of differentiation and systems of innovation, however, are where you get choice. Again, if you're just wanting an e-commerce site, are you sure that Shopify, Magento or any of the other famous engines are not good enough, not customizable enough? Are you sure that you want to reinvent the wheel?
G
It might be better to spend 100 hours customizing an off-the-shelf app than 500 hours building your own. Unless you have a massive in-house development shop or an external contract development house, I would not even begin to recommend creating something from scratch. Heck, look at my own company, Portainer: we have spent over 10 million in two years creating our Kubernetes management tooling.
G
Most devs are either .NET or JavaScript, and experienced in building either web apps or installable apps. Very few (in fact, we probably have all of you in this very room) develop microservices and deploy containers. There are so very few cloud native devs that companies that elect to self-build new apps need to be sure that they can recruit and retain one of you.
G
Okay. So now you've selected your app stack and you're either going to buy or build, but now what? You need somewhere to run this stack, and that somewhere will likely end up being a container platform. Today there are two primary container platforms: Docker for smaller deployments (and, I would argue, for development) and Kubernetes for everything else.
G
Unless you have been living under a rock, you will be very aware of Kubernetes and its ability to run containerized apps in a highly automated way across on-prem or cloud, truly unlocking hybrid cloud and removing platform lock-in. Kubernetes is awesome, really simply awesome. It's the cleanest way to run modern IT stacks, and so it's something we all need to get comfortable with. But neither Docker nor Kubernetes should be considered a platform; they're an orchestrator, which is just one element of a platform.
G
A platform comprises all of these elements. You cannot fully embrace Kubernetes without the ability to triage container-based apps, view logs from short-lived and highly dynamic workloads, and automate the continuous deployment of applications. Running an app in Kubernetes is useless without people being able to connect to it, so inevitably you'll also have a load balancer, proxy, DNS server and dynamic SSL certs to manage.
G
So when someone says Kubernetes, assume they mean all of the above. It's why I laugh when I hear people saying that they've adopted Kubernetes, have decided on AKS or EKS, and have made no consideration for the additional tooling needed. I'm sure you've all seen the CNCF landscape; it's a mind-boggling number of members.
G
These are all here because they're the ecosystem that surrounds Kubernetes; they are the vendors and providers of the tech that enables the platform mentioned prior. You need to be super careful not to get drowned in here, though. So much choice is good, but it's also very bad. With all that choice, how do you find the good from the average? How can you know which products are worthy of your time to invest in a pilot? How do you know what will still be here six to twelve months from now?
G
This is good if there is a contributor pool, but if there's not, the tools age out quite quickly. Even for those that retain their open source projects under their own entity, if it's not sustainable to support them, they also end up dead. Be sure to be careful when you choose a tool: make sure it's under active development, make sure it's backed by a commercial entity, make sure the commercial entity has a way to sustain their open source product, and make sure that other people are using it.
G
These APIs need to release versions in lockstep, and the more you have, the greater the chance of finding yourself in a position where you cannot upgrade Kubernetes because a tool you rely on doesn't support the new version. That's dangerous: given the pace at which CVEs are found and fixed in Kubernetes, you should be able to upgrade as soon as possible, not be forced to wait. Sometimes the multi-tool is the best, as a multi-tool is managed as a single package.
G
This saying is pretty well known, but the thing is, it's accurate: there's a lot to know about Kubernetes and a lot of things to trip you up. It's critical that the complexity is well understood and respected, and that a plan is in place to deal with it. Don't be fooled by those who say Kubernetes is easy; they are ignorant of the true complexity that lies under the surface. Sure, it's easy once you're a trained expert, but none of us are born experts.
G
In anything, there are a number of day-two challenges that you will need to overcome. Again, these are challenges due to the change in architecture, and none of these should be a surprise. When it comes to your Kubernetes clusters, do you want to treat them as cattle or pets? Pets means you need to update them, triage them closely and manage them; cattle means you deploy a cluster to host your apps, and when there is a newer version, you spin up the new cluster, deploy your app there and delete the old one. Far simpler.
G
How would you monitor and report on SLAs? How would you ensure security and compliance? How would you back up and protect? If you are modernizing legacy apps, are the ISVs comfortable supporting them in containers? Kubernetes is designed to be dynamic, which is fine if you have a change advisory board that can accommodate this dynamic nature. But don't try to be a hero: sure, it's awesome to be the first company to adopt a new technology from the CNCF landscape, but really, is that smart?
G
Also, resist the urge to build a better mousetrap. Sure, you're a developer or engineer, and you like having the skills to build apps and solutions, but is it really a good use of your time to try to build a better tool than one that's already out there, already being maintained, and already has a support network? Bash scripts, PowerShell scripts: these are all a nightmare to maintain versus readily available tools with a predictable release cadence. My advice is to ask yourself: can I pass the 3am test? Which is: it's 3am…
G
The system has suffered a catastrophic failure: how many of my team can rally around to help fix it? If the answer is one or two, you're in trouble. I also strongly recommend ensuring you have vendor support for open source components, too. Sure, they have no upfront license costs, but then again, at 3am MongoDB has failed; wouldn't it be great to be able to call customer support somewhere? It's why VMware, Nutanix and Microsoft all did well supporting these at times of crisis. And never lose sight of the goal: remember slide one.
G
We are doing all this for one reason and one reason only: to help your business thrive. We're not doing this because the tech is cool (it is cool, but that's irrelevant); we're doing this because it's the best way to deliver game-changing digital services to the market. So don't get tech drunk; remain focused, and apply a value to your time. Once you apply a value to your time, you'll find you spend far less time thrashing around, and you'll make quick decisions based on decisions that others have made before.
B
Thank you, Mike. We hope you enjoyed Michael's session. We believe the cloud native ecosystem will help everyone from startups to enterprise companies build their own solutions. Our next session is going to be a pretty interesting one. We know that when you are working with Kubernetes workloads, there's a huge opportunity to improve the efficiency and cost of running them. Karpenter is an open source, flexible, high-performance node provisioning tool built for Kubernetes by AWS, and Rohini is the best person to talk about how Karpenter helps developers run their workloads.
B
She has worked in multiple roles, like solutions architect and support engineer, across different geographies, helping customers. She is passionate about sharing her experiences and helping strengthen the builders of tomorrow. Here's Rohini, who is a senior developer advocate at AWS, and we are honored to say that AWS is the gold sponsor of KCDSL. Rohini, the stage is yours.
M
Thank you for joining this session. My name is Rohini Gaonkar and I'm a senior developer advocate at AWS. Today we are talking about Karpenter, an open source Kubernetes cluster autoscaler. If you have more questions, feel free to reach out to me via LinkedIn on the given website. So let's quickly look at the different ways we can do Kubernetes scaling. Remember, the goal here is to efficiently use the infrastructure, have less wastage, save cost, and ensure a more highly available application.
M
With horizontal scaling, you simply keep adding more pods as your demand increases, and if your demand decreases, you automatically stop pods to free the resources — so you scale out and scale in as per your need. Vertical scaling, as the name suggests, means adding capacity to the same resource: the Kubernetes VPA automatically adjusts the CPU and memory reservations for your pods to help right-size your applications. And finally, there is the Kubernetes Cluster Autoscaler, a popular cluster autoscaling solution maintained by SIG Autoscaling.
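The scale-out/scale-in behaviour described here is usually driven by a HorizontalPodAutoscaler. A minimal sketch — the Deployment name, replica bounds, and CPU threshold are all illustrative, not from the talk:

```yaml
# Hypothetical HPA: scales the "web" Deployment between 2 and 20 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The VPA and Cluster Autoscaler operate at different layers (pod sizing and node count respectively), which is why they are typically combined with, rather than replaced by, an HPA like this.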
M
You need to make sure that each instance type in a node group has roughly the same amount of CPU and memory resources; otherwise resources might be wasted, or insufficient, during a scale-up. To support different instance types, you need multiple node groups. Also, as I mentioned, it's recommended that each node group span only one availability zone — so if you want your workload to span multiple availability zones for high availability, you need a node group per instance type, per availability zone.
M
Well, the Cluster Autoscaler was not originally built with the flexibility to handle hundreds of instance types across multiple availability zones. It loads the entire cluster state into memory — the nodes, the pods, and the node groups — identifies the unschedulable pods in the cluster, and simulates the scheduling for each node group. So when you have lots of node groups, this gets very complicated, and when run at scale it often takes up to five minutes to actually scale the capacity in your cluster.
M
What's that asterisk? Well, AWS is the first cloud provider supported by Karpenter, although it is designed to be vendor-neutral. Karpenter works in tandem with the Kubernetes scheduler by observing the incoming pods over the lifetime of your cluster, and it will launch or terminate nodes to maximize your application availability and cluster utilization.
M
When there is enough capacity in the cluster, the Kubernetes scheduler will place the incoming pods as usual. When pods are launched and they cannot be scheduled using the existing capacity of your cluster, Karpenter will actually bypass the Kubernetes scheduler and work directly with your provider's compute service — for example, Amazon EC2, instead of the Auto Scaling groups the Cluster Autoscaler uses — to launch the minimal compute resources needed to fit those pending pods, and it binds those pods to the nodes it provisions. The same happens in reverse as pods are removed or rescheduled to other nodes.
M
The provisioner comes with some smart defaults, but these are fully configurable, and the defaults include the configuration of the instance type selection, the launch template generation, the subnets, security groups, et cetera. So you could think of two personas: there's an administrator, and there's an application developer.
M
It is expected that a cluster administrator would install and update Karpenter. They define the provisioners to segment the infrastructure space as needed — so they can define provisioners based on purchase options, the capacity type, the instance types, the availability zones, and so on. And then there's the application developer, who is actually deploying the pods whose requirements will be evaluated by Karpenter.
M
They write the manifests, and as long as the requests are not outside of the provisioner's constraints, Karpenter will look for the best match for the request, comparing against the same well-known Kubernetes labels defined by the pod scheduling constructs. Note: if the constraints are such that a match is not possible, the pod will remain unscheduled.
M
Kubernetes features that Karpenter supports for scheduling pods include node affinity and the node selector. It also supports pod disruption budgets, topology spread constraints, and inter-pod affinity and anti-affinity as well. So let's quickly look at our demo. In this demo, I have already set up a Kubernetes cluster, and it also has Karpenter installed — you can find all the steps in the Karpenter documentation; I'll provide the link towards the end of this presentation. Okay, I've already set that up, and I've also defined a default provisioner.
M
So this is something your administrator could do. They have defined a default provisioner, and in there I've specified that any capacity you launch should be Spot, should be of this instance type family, and could be of a certain instance size — right now I have just commented that out, but you can pin it to a certain instance size — and I've also specified that it should be AMD-based (amd64) instances as well.
M
You can also set a limit on the number of CPUs that we would want. How can Karpenter understand where to launch these EC2 instances? Well, I've also specified where the subnets and security groups are — I've already tagged them, so it will go ahead and discover that, hey, these are the subnets where you want to launch your EC2 instances, or the nodes that you would need. And there's an important setting here that I've configured: ttlSecondsAfterEmpty: 10.
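The provisioner itself isn't shown in the transcript; a sketch of what it might look like in the `v1alpha5` API that Karpenter used around the time of this talk — cluster name, instance families, and the CPU limit are assumptions:

```yaml
# Hypothetical default Provisioner matching the demo's description:
# Spot capacity, amd64 architecture, a pinned instance family,
# tag-based subnet/security-group discovery, and a 10s empty-node TTL.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values: ["c5", "m5"]
    # - key: karpenter.k8s.aws/instance-size   # commented out, as in the demo
    #   operator: In
    #   values: ["large", "xlarge"]
  limits:
    resources:
      cpu: "100"            # cap on total provisioned vCPUs
  ttlSecondsAfterEmpty: 10  # terminate empty nodes after 10 seconds
  provider:
    subnetSelector:
      karpenter.sh/discovery: my-cluster   # discovered via tags
    securityGroupSelector:
      karpenter.sh/discovery: my-cluster
```

The `karpenter.sh/discovery` tag on subnets and security groups is what lets Karpenter "discover" where to launch nodes, as described above.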
M
What does this mean? Once a node is empty — there are no pods running on it — Karpenter will wait for 10 seconds before terminating that node, or that EC2 instance. I've kept it low because it's a demo and I want to show it quickly, but you can keep it higher if you are running a production workload. So I've already applied the default provisioner; what I'm going to do next is create more replicas and see what happens.
M
More replicas, in this case. So before we do that, you can see that there are no pods running right now, there is only one node running right now, and these are the Karpenter logs. Generally I start with one and then escalate further, but now, to save time, here's what I'm going to do.
M
I am going to just ask for maybe four pods to be created. The moment I say yes, it is going to have four pods that are in the Pending state, and you can see that in the logs — let's go up a little bit.
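The demo workload isn't shown in the transcript; Karpenter walkthroughs commonly use a `pause`-container Deployment so that each replica reserves real CPU. A sketch along those lines — the name, image tag, and request size are assumptions:

```yaml
# Hypothetical demo Deployment: starts at 0 replicas and requests 1 vCPU
# per pod, so scaling it up creates Pending pods for Karpenter to satisfy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1"
```

The scale-ups in the demo would then be something like `kubectl scale deployment inflate --replicas=4` (and later 100, and back to 0).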
M
It says: hey, create a node for four pods requesting a certain capacity. Okay — yes, it is now waiting for this EC2 instance. So it has already created that EC2 instance, and you can see that it launched an EC2 instance 23 seconds ago. What is this EC2 instance size? The size is obviously anything that would fit all these four pods, and it is a c5.
M
It is amd64, and if you move a little bit here, you can see that it is Spot, and the instance is already running. So it's 39 seconds in, but you can see the status has changed from Pending to ContainerCreating. As we talked about in the presentation, when Karpenter is creating those EC2 instances, it is not only considering that, hey, I need to schedule these —
M
It's not only creating those EC2 instances, but also making a scheduling decision. So when it is creating these EC2 instances, it is bypassing the kube-scheduler and directly binding these pods to these nodes as well. You can see that within a few seconds — I think it was 58 or 60 seconds — all these pods are actually running on these EC2 instances. Let's escalate it a little bit: instead of 4 —
M
Let's say I want 100 pods, and we'll see how quickly Karpenter is able to compute how many nodes it needs for all these 100 pods and quickly launch all those EC2 instances.
M
You can see that within seconds it has calculated the EC2 instances it needs. So let's go up and see. Okay: create a node for 85 pods — so it could fit a few pods on the existing EC2 instance, and it has gone ahead and deployed those; that is something the kube-scheduler will do quickly. If you want, we can also check right away how many pods are actually in the Running state right now.
M
So right now, 15 are actually running on the EC2 instance — or the node — that was already ready. The one that is not ready is where the other 85 pods are going to be placed. Okay, and you can see that it's already 75 seconds, and this EC2 instance will get ready in a few more seconds, or a couple of minutes, before it can actually have all these pods placed and in a Running state.
M
So by bypassing Auto Scaling groups and talking directly to EC2, we are able to save 30 to 35 seconds when we are trying to schedule a lot of pods. And if you've seen this: in 108 seconds our EC2 instance was up and running. Let's see how many pods are up and running right now — you can see that, yes, 20 pods are up and running. There are some in ContainerCreating mode, so they are downloading the image and getting ready, and if you want, you can keep checking that.
M
You can keep checking how many of these are getting created. You can now see that that number has quickly started climbing, and within — what, it's been two minutes since that EC2 instance was launched — you can see most of the pods have already been deployed. So that's how quickly Karpenter can actually get the EC2 instances up and running. So, all the hundred pods are up and running. What we'll do next is actually just go ahead and remove all these pods.
M
Okay, so I'm just gonna say: hey, just go ahead and have zero, and you can see how quickly it is going to scale down. The pods will go off instantly, but for the EC2 instances you can see that it has added a TTL. If you see the logs that I've highlighted, it says it added a TTL to the empty node, and because it was just 10 seconds, it says it has triggered the terminations — and within 10 seconds all my EC2 instances have been deleted.
M
If I want to make it more interesting, I can also go ahead and, let's say, patch the deployment and say: hey, instead of AMD, I want Arm-based EC2 instances. Once that is done, I'm going to ask for, let's say, two pods that need a node that is Arm-based. Now in this case — let's scroll down — we also got Arm EC2 instances. But you might be wondering: wait, didn't you mention AMD already?
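The patch itself isn't shown; switching a workload to Arm-based (Graviton) nodes is typically done with a `nodeSelector` on the well-known architecture label. A sketch — the Deployment name is an assumption carried over from the earlier example:

```yaml
# Hypothetical strategic-merge patch: require arm64 nodes for the pods.
# Could be applied with e.g.:
#   kubectl patch deployment inflate --patch-file arm64-patch.yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
```

As noted just below, this only works if some provisioner permits arm64; otherwise the pods remain unscheduled.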
M
But yes, I have also applied another provisioner, and you'll be able to see that here — one second. Okay, so you can see that it already found a provisioner for arm64, and the request that I just made matched that arm64 requirement.
M
It was there in one of the provisioners, and so it went ahead and deployed that EC2 instance with arm64. So you can have multiple provisioners; these provisioners can have different constraints, different requirements, and Karpenter will automatically pick up that, hey, there is already a matching provisioner. If there had been no provisioner for arm64, it wouldn't have allowed the user to actually go ahead and deploy this particular application. So that's it!
M
That's the simple demo. Let's go back to our presentation and wrap up this section with the key takeaways. You use the default provisioner for diverse instance types and availability zones, and you can add additional provisioners as you need. You can also control your scheduling based on topology spreads, taints and tolerations, provisioners, etc. Use the HPA with Karpenter to scale in and out, and you can schedule these pods on Spot if you need to save cost. Right — if you want to install Karpenter, if you want to play with it,
M
or you want to contribute to Karpenter, do check out the documentation and the GitHub link I have mentioned here. There are some best practices that we discussed for using this with EKS, and there are also certain workshops if you want to do more hands-on work with Karpenter; you can find all those details in these resources. So that's it, that's me — thank you for joining me for this quick demo and discussion about Karpenter.
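The scheduling controls listed in the takeaways — topology spreads, taints and tolerations, and Spot — can all be expressed in an ordinary pod template. A hypothetical fragment (the labels, taint key, and values are illustrative, not from the talk):

```yaml
# Hypothetical pod-template fragment combining the takeaway controls:
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot        # request Spot capacity for cost savings
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone   # spread replicas across AZs
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: web
  tolerations:
    - key: dedicated                        # tolerate a hypothetical node taint
      operator: Equal
      value: batch
      effect: NoSchedule
```

Karpenter evaluates these same well-known labels and constraints when deciding which node to provision for a pending pod.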
M
I hope this was insightful and useful, and I hope we all experiment and continue innovating in the way our Kubernetes clusters are run today. So thank you again — see you next time.
B
Thank you, Rohini. Yeah, that is a great tool when you're working with Kubernetes clusters, and we hope you picked up the best practices to follow while using it. Our next session is going to be an exciting one: we have planned a panel discussion with Kanchana — yeah, all of you know by now who the main organizer, VP and GM at WSO2, is — Eric, the CTO at WSO2, and we have Murali as well, and also here with me, Chamos and Kavitch, to continue the discussion with our panel.
A
When I was hearing your stories — actually, I was an undergraduate when I got to know about Kubernetes, but I already knew about containers and Docker from when I was working on a group project. At that time I was trying to troubleshoot an app which was working on my laptop but not on my friend's laptop, due to a dependency issue.
C
My story actually goes a really long way back, but it didn't end there. This was the days when Sun Solaris had Zones, and BSD had jails, and whatnot — but it was very challenging to use. I mean, it wasn't called containerization; it was more like separating what you're running within a physical machine, and practically everyone gave up. But then, when Docker started in 2013, I got really hooked on it, and then over time —
C
As Docker matured, I was building our own company, Platformer, and we were very close to building what Kubernetes was going to build. But when we heard about Google working on this, we decided to put our efforts into working with them and actually build our platform around Kubernetes instead.
C
So I think it started, and stopped, some 20-plus years ago, because it was very hard to use back then — when it wasn't really called containers. But most recently, with the advancement of Docker and so on and so forth — that was my journey, and over the last seven or eight years I have seen significant improvement in this, and it will continue to evolve.
N
Well, I trace it back to a presentation at HPTS in 2007 from Google, where they explained the decisions behind their commodity data center architecture. I realized from that that they had reinvented it, and that we needed to think about moving all of our systems over, because they had created the most cost-effective, scalable infrastructure.
N
Adrian Cockcroft was at the same presentation — I've known him for many years as well, from when he was at Sun and I was at Digital — and I followed his journey to Netflix very closely. When I got to Citi, we decided to follow Netflix; Adrian came to talk with us several times, and he was the one who told us about Docker. He said: this has solved the problem of containerization.
N
The next big challenge was container orchestration, which turned out to be Kubernetes. During that time I was working primarily on service-oriented architecture designs and implementations — first at Credit Suisse, where I went after I was at IONA (they were a big IONA customer, and I went to help there), and then at Citi. The CIO one time did a catch-up and asked what I was working on. I told him about SOA and managed evolution — modernizing IT systems — and he said: I don't want you to do that; you might break something.
N
No, no, no, we're not going to break anything. "But you might, so I'm not sure you should do that." So I said: well, what about this stuff Google is doing — are you interested in that? He said: oh yes, that sounds really interesting. So he allowed me to sponsor a project to redesign our payment processing systems using microservices in the cloud. We spent several months doing this, and instead of batch-based, we had event-based microservices. Then we set out to find the implementation.
N
I found, when I spoke with Sanjiva about two years ago now, that what he was working on with Ballerina and Choreo in the area of cloud native solves a huge problem that we found at Citi: even if we understood the benefits of getting payment processing systems to the cloud — to have the auto-scale up and scale down for seasonal workloads — we had a huge challenge understanding how to do it.
N
Getting people who understood the difference between microservices and monoliths, and how to recode them — the stuff they're working on at WSO2 really should help with that in a great way. So this is kind of where I am on that journey, and I hope it's going to have a good ending.
O
On the history part, I would say I formally joined the bandwagon back in probably mid-2015, when I started looking at Docker, and then I joined a startup where we were trying to build a container service platform. It kind of didn't pan out, and I was looking around — and at the time there were not many options, right: Docker itself, and then Apcera, and Rancher.
O
And it has been an amazing journey for almost five and a half years, and even now, in the current startup that I'm working at, we are still using containers for various things. So it's —
C
Thank you, both of you. Back to the panel, yeah.
P
Why do you think Kubernetes is still offered by the different cloud providers as a managed service?
C
I'm probably going to ask Eric and Murali. I think the managed Kubernetes space is growing compared to customers deploying their own Kubernetes onto the public cloud — so maybe Eric can answer first, and then we can go for a bit of a discussion.
N
Well, sure, I can say what we did at Citi while I was there — I left about a year and a half ago, and before that I was leading cloud migration for one of the divisions. We had chosen OpenShift as our Kubernetes, on the idea that it could be used anywhere. We had a private cloud internally, built for us by Dell, and we deployed initially internally; then we were moving to AWS, and we did a pilot for that.
N
So I think the value proposition for the cloud providers in having a managed, hosted Kubernetes is that they can integrate it with all their monitoring tools and security and reporting and analytics — things that are difficult to handle, as we found out, when you're bringing a separate Kubernetes product into the cloud. At the time they were not as mature, and in the consumer part of the bank —
N
They were trying to use EKS from Amazon and found a lot of security challenges, but I believe they have been working on those over the last year and a half or so and have solved most of them. But I think, more than anything else, for the cloud providers it's part of the value proposition of moving applications to the cloud: Kubernetes is the default deployment, and they provide it as a service, integrating it with all their other offerings.
C
So, thank you, Eric. Murali, anything you might want to add — your experience of managed versus, you know, bring-your-own, actually?
O
Eric's answer is pretty valid — you get to use the native services of the cloud provider if you use the managed service, right. On top of it, the next thing that I can think of is support.
O
If, let's say, you are new to the paradigm, the ramp-up time is pretty fast if you're using managed services. You don't have to deal with the automation, you don't have to train your team or expose them to tools like k3s, or some other tools like OpenShift or Rancher — you can just get started and then start running your applications. So probably these are some of the reasons.
O
You know, if you are a customer who is trying to run Kubernetes on-prem, then you pretty much don't have many options there, right — you would end up building your own Kubernetes stack. In that case, even if you expand your footprint to the cloud, you would end up building your own Kubernetes there too, because it's kind of natural: you built the stack for on-prem, so you're just extending it to the cloud.
O
Of
why
people
would
work
with.
A
O
Versus
build
their
own
stack.
C
Thank you. I think that's probably why Google, especially, has gone another step ahead, isn't it — GKE Autopilot, they call it, which is completely managed, kind of serverless Kubernetes in GCP, where you don't even have to figure out how many nodes and things like that you want; they'll actually do that for you. Yeah, thank you very much, both of you. Kevin, are we moving on to the next question, or do you want me to ask the next question?
A
Okay, so the second question is: why do you think Kubernetes took off with such huge hype — any specific reasons?
N
Yeah, I try to look at this a bit from first principles. I remember I was attending a conference in 2015 — HPTS, in California — and we had a speaker from Google. His name was John Wilkes, and he gave us a talk about Borg, and a demo of Borg — and this is where Kubernetes comes from.
N
As we know the story, his demo was to show how he could deploy 10,000 copies of a simple hello-world program in a matter of seconds using Borg. And I think what's very important about Kubernetes is that it's solving a very important problem in cloud computing. As he mentioned, he was showing these arrays of commodity servers — which you can see if you search for Google data centers, or Azure data centers, or Amazon data centers: you see racks and racks of consumer-grade hardware.
N
This
is
a
huge
difference
from
how
we
used
to
do
systems
on
enterprise
grade
mainframes
and
servers
made
for
specific
loads
and
specific
programs
running
on
specific
nodes
and
in
this
environment,
where
you're
running
on
consumer
grade
hardware,
you're
running
at
very
low
cost,
the
lowest
cost
possible,
but
you
have
a
highest
failure
rate
compared
to
enterprise
software.
So
what
you
need
to
do
is
to
deliver
a
way
to
deliver
multiple
copies
of
programs,
so
that
you
have
a
failure.
N
Another
copy
is
able
to
take
its
its
place
and
in
his
presentation
he
mentioned
that
for
every
2000
machine
service
deployments
that
you
have
in
those
large
racks,
it's
very
common,
more
than
10
exits
per
day,
and
they
expect
this
and
kubernetes
is
really
coming
from
this
world
of
I
have
consumer
grade
hardware.
I
have
racks
and
racks
of
servers.
N
N
I have the lowest-cost approach possible, but in order to make the low cost work, I have to be able to sustain the failure rates of consumer-grade hardware and keep everything going. Kubernetes comes out of that, and it became the default way to deploy your container-based microservices to achieve all these benefits of cloud computing. I think that's the key problem it solves, and that's why it became so popular.
O
Shall I have a go? Yeah — I mean, Eric covered a very good point there. You know, if you had to compare Kubernetes with a person — the technical —
C
100% agree, yeah. I think that was one of the biggest things that the community — Google and Red Hat and IBM, the early contributors — were able to do, compared to Docker. I think Docker kept it very close to their heart, right; they didn't get a lot of community involvement, and so on and so forth. At one point they were almost going to make it a proprietary thing for themselves, rather than engage with the community. We saw the same thing happen with Mesos.
C
Mesos
was
open
source,
but
they
also
did
not
entertain
a
lot
of
community
right.
I
think
that
was
one
of
the
other
key
things.
The
communities
got
so
popular
that
the
engagement
with
the
communities
would
really
be,
and
while
those
three
were
the
biggest
contributors
back,
then
I
think
a
lot
of
the
others.
There
are
a
lot
of
others
to
actually
contribute.
Now
they
have
kind
of
become
they
are
still
significant
contributors,
but
they
are
more,
like
you
know,
less
of
a
contributors
rather
than
the
majority
of
the
other
contributors.
I
Next question from me: what is your perspective on using open-source monitoring and logging tools, rather than going with a subscription one, for a production environment? I know some companies highly recommend using Stackdriver, the ELK stack, and New Relic — likewise, some paid monitoring mechanisms.
O
Yeah, sure, I can take this one. So, if I understand correctly: why would somebody build their own monitoring stack versus use some subscription model, right — just to make sure. My take is: it's all dependent on business needs.
O
A lot of the time — let's say if you're a startup — you don't have much time to build out the tech team around monitoring, build that automation, scale out everything, right. So what you would do is consume the readily available stacks, and once you grow beyond a certain critical mass, then you would probably evaluate: okay, does it make sense — the cost-to-value ratio — does it still —
O
Does it still make sense? Does the subscription make sense, or would it make sense to build out our own stack — would there be cost benefits, would we have better control over the infrastructure, and things like that, right. These discussions are kind of a natural progression as teams evolve from small to bigger. So it's more about the business requirements; I think there's no hard and fast rule — it depends on a particular business's needs, or a team's needs.
N
I completely agree. One thing I could add is that there's an inertia factor sometimes. For example, at Citi they were using Splunk very extensively, and the natural thought was: let's just keep using it, even when we're going to the cloud. Sometimes it's difficult to rethink the problem in the context that you just explained, Murali — which is the sensible way to do it — if you've made a decision years ago that you just want to stick with for some reason, as some people do.
C
Absolutely, yeah. To add to that, Eric: I think previously, at NBN, similarly, they also continued to use Splunk even though it became quite expensive.
C
It
is
also,
at
the
same
time
quite
challenging
to
move
on
to
either
an
open
source
or
another
product
right,
that's
also
in
their
minds.
I
think
they,
the
subscription
based
monitoring
and
observability
teams
products
have
become
really
smart
in
terms
of
the
way
they
charge
right
and
and-
and
we
know,
even
asia
analytics
was
like
driver
they
charge
for
the
gigabyte
of
data
that
you
actually
push
and
that
can
grow
really
really
rapidly
over
time
with
micro
services
right
so
yeah.
C
I
think
the
yeah
closing
on
that
one
I
think,
like
morally
and
eric
said
it's
pretty
hard
to
say
that
whether
you
should
go
on
one
or
the
other,
but
if
it's
a
startup
I
think
it's
kind
of
highly
recommended.
You
know
you
start
with
some
of
the
friendly
available
sas
tools,
rather
than
actually
going
to
try
and
establish
your
own
stack,
because
you
can
actually
spend
quite
a
lot
of
effort
and
time
and
money
also
actually
getting
resources
to
do
that.
A
So the next question is: do you think that, over time, the dominance of VMware will fade away, with companies such as Google and AWS starting to offer Kubernetes to run on bare metal in their private data centers? Or do you think most customers will move to managed Kubernetes services from the public cloud providers and decommission VMware vSphere?
C
I think it's kind of two questions, isn't it, so yeah — maybe we should break it up. The first part, I think, is asking whether the dominance of VMware will fade away because Google, AWS, and others are coming into the data center with their own offerings — Anthos, AWS Outposts, Azure Stack — and whether customers will actually completely forget about all that; that's the second question. And yes —
O
I mean, my thought process is: VMware still has a stronghold in the majority of enterprises, right. The main reason is that it works pretty flawlessly for most business use cases, and the reason why people still use their own infrastructure is data security — there will be compliance reasons, there will be some other reasons where a particular company doesn't want to push sensitive data out into the cloud, right. So in that case, VMware makes sense. So now —
N
I think VMware is on the way out, basically — it's going to take a very, very long time, because, as Murali said, it's very well established and a lot of people are still using it and still finding value in it. But the original reason it was created — to virtualize the operating system in a large data center, and to create virtual machines independently, for applications and developers, to segment the resources of the operating system — that reason is really not very compelling in the industry anymore, for something like VMware to exist.
N
When
you
look
at
the
the
cloud
architectures
of
hundreds
and
thousands
of
machines,
they're
more
interested
in
the
operating
system
level
and
the
kubernetes
level,
the
deployment
level,
the
container
level
of
things
than
they
are
in
the
virtualization
and
those
environments,
virtualization
can
add
an
overhead
which,
which
may
not
be
helpful.
C
Yeah, I think you both covered it really well, and I also think, like both of you said, VMware has a very strong foothold, so it's going to take a while, given that install base. We'll get to see what Broadcom will do, but I don't think customers will jump ship just because it's getting acquired, yeah.
N
I don't think so either, but I also wonder whether Broadcom can sustain the innovation, and a future vision, for VMware.
P
Yes — sorry. So the next question is about managing complex applications. Managing complex distributed applications on K8s is still a challenge, and so many companies are coming out with application packaging frameworks — to name a few: the Open Application Model, Dapr, and most recently Acorn Labs. What are your thoughts on that?
N
Yeah, it's a difficult question, because, as we talked about earlier, the reason Kubernetes was developed was to deploy small pieces of code onto small computers — hundreds and hundreds of small computers — automatically. And yet Kubernetes has no inherent restriction on running large applications and large programs. It's very possible to take a monolithic Java EE server and run it in a Kubernetes pod.
N
Given sufficient resources, it will run. It's not the best way to use Kubernetes, because it's not consistent with the problem Kubernetes was designed to solve, but it can be a transitional step. This is one of the reasons that at Citi — when I actually introduced OpenShift into Citi, ran the initial proofs of concept, and did the first applications on it — one of the reasons we chose it in my division over Cloud Foundry, for example, was that Cloud
N
Foundry was much too strict on the twelve factors, and they also didn't have the capability to run the data tier very well at that time — which was 2015, I think. Anyway, it served us better, because we had a way to make the transition to the microservices world we wanted to get to in steps — microservices being such a different way to compose, run, and manage applications.
N
It requires some investment, some re-engineering, and it's not possible to do that all at once, so the OpenShift version of Kubernetes allowed that transition to take place gradually. So I think there's often some confusion between what you can run on Kubernetes and what you should really be running on Kubernetes. Just because it works doesn't mean it's the right thing to run there, and it may take some time for that to become apparent and for that scenario to play out.
O
You know, Kubernetes wasn't built with the intention that it would be consumed directly by end users, right? So you always need some kind of abstraction, some kind of management layer, on top of it that makes it easy to work with the entire Kubernetes stack. Platforms like OpenShift and Rancher attempted to do that, and they got a lot of adoption, and obviously the next progression is: how do you simplify app deployment?
O
How do you compose the entire app specification? You know, people fell in love with the Docker user experience because it was so simple. If you look at Kubernetes manifest files now, nobody can author one from scratch. But if you look at a simple docker-compose file, you could probably just hand-write it, hand-code it, right? You can't do that with any of the Kubernetes manifests; you need to copy-paste, or you need to use some kind of generator, although there are other wrappers.
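That contrast can be made concrete with a small, invented example: a docker-compose file most people could hand-write, next to a sketch of the Kubernetes Deployment needed for the same single container (the image, names, and ports are made up for illustration).

```yaml
# docker-compose.yml -- short enough to author by hand
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
```

```yaml
# The equivalent Kubernetes Deployment is already several times longer,
# before adding the Service, probes, and resource limits you would want.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```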
O
On top of it, but still, you know, it's not easy; it's quite complex. So I think it's kind of a natural progression.
O
Now what has happened is that Kubernetes has become the de facto standard. It got huge adoption, very similar to how the Linux environment evolved, right? You know, I remember back in my college days I used to have three CDs of a Linux distribution, and just getting it to install on my desktop was a pain. It was very difficult; the drivers wouldn't work, things just wouldn't come up.
C
Exactly, exactly. I think the newest kid on the block is your former co-founders' venture, Acorn.
O
Yes, and I think their first meetup was actually last Wednesday or Thursday; we had the first online meetup, and I...
C
Some of the other tools are also trying to do something similar, a way of simplifying it, but yeah, that's right. I think it's evolving, right? I mean, when we talk about Kubernetes, we talk about it as the new Linux of the cloud.
N
I think that was also a good comparison, the evolution of Linux. At one time Linux was not a standard operating system, and over time, once you establish a standard, you can start working above it and start to abstract it. That's what we're seeing in the marketplace as well: there are really a number of companies now coming out, ourselves included, with the capability of abstracting across Kubernetes for the applications that are deployed on it.
N
I think that's very much the next logical step as well, including not just the tools but also abstracting many common application capabilities.
C
Right, I think we are right at the top of the hour. Are there any questions or things that...
D
Thank you, folks. It was a great discussion, and there was a lot to learn through your experiences and knowledge. Now let's move on to our final session for the first day. This is powered by our sponsor of KCDSL 2022, Senate Limited. Here we are going to talk about Ballerina. Ballerina is an open-source, cloud-native programming language focused on integration. In the journey from code to cloud, it helps simplify cloud-native application development by simplifying how you work with network services.
D
We have Dakshitha here to show us how we can leverage Ballerina for cloud-native application development. She is currently working at WSO2 as a developer advocate, and she has over 10 years of experience in the roles of software engineer, solution architect, technology evangelist, program manager, and developer advocate at WSO2. Hi Dakshitha, the stage is yours.
Q
So what are network services? They're pretty much APIs or microservices, which are core components of cloud-native computing. Breaking applications into small, loosely coupled parts makes it easier for developers to build agile and resilient software. These network services can be in the form of a REST API, a GraphQL API, a gRPC service, and so on.
Q
So what is it like to build network services? Distributed applications are complex, and when you're dealing with a network, errors are a normal part of doing business, especially when you consider the eight fallacies of distributed computing that you can see on this slide. To build cloud-native applications with network services...
Q
So I'm going to talk about a few characteristics, or offerings, of the Ballerina language that make it a really good programming language for writing your services. The first one I want to talk about is the Ballerina language being network-oriented.
Q
So Ballerina has, or accommodates, the concept of a service, and a service can be written in just three or four lines of Ballerina code. Services in Ballerina are powered by listeners and libraries, and that's how a service works, right? And Ballerina's service approach is coupled with its unique data-oriented type system.
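As a sketch of that "three or four lines" claim, here is a minimal HTTP service in Ballerina (the path, port, and greeting text are arbitrary choices for illustration):

```ballerina
import ballerina/http;

// A listener plus a service object: the whole program.
service / on new http:Listener(8080) {
    resource function get greeting() returns string {
        return "Hello from Ballerina!";
    }
}
```

Running it with `bal run` starts the listener, and a GET request to `/greeting` on port 8080 returns the string.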
Q
So you can actually describe the remote interface in your service objects and generate your client code, and the combination of these features enables cloud integration to work smoothly. And just as you have service objects, you also have client objects to consume remote services.
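A hypothetical client-side counterpart might look like this; the URL and path are invented, and the point is that remote calls use `->`, so network interactions stand out in the code:

```ballerina
import ballerina/http;
import ballerina/io;

public function main() returns error? {
    // A client object represents a remote endpoint.
    http:Client api = check new ("https://api.example.com");
    // Remote methods are invoked with ->, visually distinct
    // from local function calls.
    json items = check api->get("/items");
    io:println(items);
}
```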
Q
And Ballerina is data-oriented, not object-oriented, in network interactions. The object-oriented approach bundles data with code, which is not the most optimal way to send data across widely distributed networks of microservices and APIs, and that's why Ballerina comes with a network-friendly type system with powerful features for handling data on the wire. Ballerina's plain in-memory data values are pretty much in-memory JSON, so a JSON payload from the wire can come immediately into the language and be operated on without transformation or serialization.
Q
We know that JSON is the most widely used data format on the network today.
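For instance, a JSON payload can bind straight onto a plain Ballerina record; this is a sketch, with the `Person` type and resource path invented for illustration:

```ballerina
import ballerina/http;

// A plain record type: in-memory data with no behavior attached.
type Person record {
    string name;
    int age;
};

service / on new http:Listener(8080) {
    // The incoming JSON body binds to Person with no manual
    // deserialization step written by the user.
    resource function post people(Person person) returns string {
        return string `Registered ${person.name} (${person.age})`;
    }
}
```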
Q
So when you talk about configurability: Ballerina takes the concept of configurability into the language, and that allows us to keep the same program and move from one environment to another environment without being explicit about what the dependencies are.
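The mechanism behind this is the `configurable` keyword, with values supplied externally per environment (for example through a `Config.toml`). A minimal sketch, with invented variable names and defaults:

```ballerina
import ballerina/io;

// Defaults used when no external value is supplied;
// a Config.toml in each environment can override them.
configurable string dbHost = "localhost";
configurable int dbPort = 5432;

public function main() {
    io:println(string `Connecting to ${dbHost}:${dbPort}`);
}
```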
Q
And then you have transactions. Writing Ballerina programs that use transactions is quite straightforward, because transactions are a language feature.
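A minimal sketch of the syntax, with the actual work elided; what matters is that `transaction` and `commit` are language keywords rather than library calls:

```ballerina
import ballerina/io;

public function main() returns error? {
    transaction {
        // Participating operations go here; if the commit fails,
        // the error propagates out via check.
        io:println("doing transactional work");
        check commit;
    }
}
```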
Q
Then it's important to highlight that, while Ballerina provides better ways to write services, it also comes with a set of features that are familiar to a programmer of a C-family language such as Java, C++, or C. Basically, the code is not something that's alien to you; you can understand how it works, right?
Q
So Ballerina supports generating Docker and Kubernetes artifacts from code with simple configurations, and this simplifies the experience of developing and deploying Ballerina code in the cloud. To deploy your code to different cloud platforms such as AWS and Microsoft Azure, annotations on service objects are used to enable easy cloud deployment.
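As I understand the workflow, the artifact generation is driven by a build option rather than hand-written manifests; a sketch, with the package names invented:

```toml
# Ballerina.toml -- asking the compiler to emit cloud artifacts
[package]
org = "demo"
name = "greeter"
version = "0.1.0"

[build-options]
cloud = "k8s"   # "docker" generates only the container artifacts
```

With this in place, `bal build` produces the Dockerfile and Kubernetes YAML alongside the executable, and a `Cloud.toml` file can be used to fine-tune the generated artifacts.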
Q
This has in fact been designed deeply into the language, in order to provide real insight into a function's or service's network interactions and their use of concurrency.
Q
A sequence diagram is the kind of diagram that works best for that, and you can see that this is like a mix of a flow chart and a sequence diagram. A function in a Ballerina program has equivalent representations in both textual syntax and as a sequence diagram, which means you can switch between the two views seamlessly.
Q
And in addition to the powerful language features, Ballerina is batteries-included, which means that the language comes with a support system: a rich standard library, with libraries for network data, messaging, and communication protocols such as HTTP, HTTP/2, gRPC, WebSub, WebSocket, etc.
Q
And if you want to learn more about the language, the website is the best place to go. It's got a wealth of resources and documentation to help you get started. With that, I'd like to conclude my 10-minute presentation on the Ballerina programming language. I hope it was useful learning about the Ballerina language. Thanks for joining.
A
So we as a community love to enrich the community's lives and to empower them to do great things every day. We hope you enjoyed today's event. We have a huge lineup of folks who would love to join in the demo sessions tomorrow; we will be offering a comprehensive live demo experience with a wide range of speakers from all around the world. Thank you for joining us, and let's have a great KCDSL.