From YouTube: Knative Meetup #8: 1.27.21

Description

On January 27, 2021 the Knative community hosted a meetup featuring a demo, "Taking AI to the Edge," presented by P J Ćaszkowicz, Creative Technologist at Omnijar, about a decade of evolving a large-scale sustainable agriculture project, including distributed machine learning across edge and cloud platforms.

Other discussion topics include:
`spec.manifests` to overwrite all manifests
`spec.additionalManifests` to change/add the resources needed
Installing customized Knative Serving
User experience interviews
A: And welcome, everybody, to the Knative community meetup, and welcome to 2021. My name is Maria Cruz, I'm a program manager in the Google Open Source Programs Office, and we are going to go ahead and get started. For those of you who are just rolling into this meeting, here's the agenda.
A: We have a few announcements today, so let's go! Oh, am I sending this to anybody, to everybody? No, here! Okay, sorry about that; now you should have the agenda in the chat box. So let me see, we have some announcements from the Operations working group. Are any members of that group on the call? Yes.
B: Yes, I am working as a lead of the Operations working group, and the announcement is about the feature to customize manifests. I think it's a kind of improvement that comes with the new release, 0.20. We all know that the operator CR cannot configure everything.
B: Users need to either overwrite all of the resources, or just some of them, or add some new ones. Documentation is available here; I posted these two links over here and in the agenda, and they are part of the official operator docs. So give it a try. It's not perfect, but you can definitely give it a try. All right, thanks.
A: Thank you, and I'm sorry about my cat yelling. I think we have more updates, maybe from Carlos. Is that you that added user experience interviews?
C: Yeah, I added that. Just two minutes; can I share my screen for two minutes? Just a quick one.
A: Do you want to send me, do you have a link that I can share? Maybe I can share it for you.
C: That's okay, I'll paste a picture of what I wanted to show in the chat when I'm done. But the announcement is that myself and Omer Benson are asking folks in the community to see if they can donate probably 40 minutes to an hour of their time to do user interviews, in terms of user experience.
C: We have been doing, I think, around five of them so far, five or six, and the idea is, and I was going to show, I will show a picture of a preview, that we are working with design thinking and sticky notes and ideation and drawing findings out of the interviews.
C: So you can contact myself, Carlos Santana, or Omer on Slack, either if you are getting started with Knative, or if you want to provide feedback because you are in the next stage of using it and trying to get into production, so we can understand the user needs better. Basically, that will feed into improving the experience that people have with Knative.
C: Either getting started, improving the docs, changing the docs, providing a quick way of getting Knative installed on your local machine, or a different way of explaining things, or maybe utilities and tools to help you get the best out of Knative and get started. So if you want to donate your time for this type of interview: it's a quick interview, we just ask a few questions, we take notes, and we convert that into a board that we are analyzing now with some designers.
C: Yeah, contact Omer or myself. There's a Slack channel called user-experience, and we are also working on a proposal for a new working group around these utilities, tools, user-experience kinds of things, so you're welcome to join that Slack channel.
D: Sorry, go ahead. I couldn't find the raise-hand button; I'm a polite person, I'd raise my hand, so I'll just interrupt. I looked, but what Carlos said is exactly true. I would also like to highlight that even if you don't want to go ahead and participate in those interviews, you might still be interested in joining the UX working group, because it's a cross-cutting area, so we definitely want to go ahead and encourage folks to come and hang out there as well.
F: That was me. Okay, so I just wanted to quickly share that we are working on the roadmap. We kicked off a series of meetings for the roadmap for the different eventing working groups, Eventing, Event Sources and Eventing Delivery, and you'll see the link in the notes for the roadmap doc. The idea there is, once we finalize an initial version of the roadmap:
F: It will be presented as a markdown file in the eventing repo, and it's kind of going to stay as the living roadmap that we present to the TOC, so feel free to engage there with your items.
F: We basically differentiate between roadmap items that have folks working on them, where there are people driving them, and wishlist items, or what we call icebox items, where it would be nice to have that but we're still looking for people to drive it. So feel free to also look into the icebox items.
F: Yes, are you able to hear me better now?
F: Oh, sorry about that, that was probably the wrong mic. Basically, I'm saying please try to look into the roadmap and feel free to add items there. We kicked off a series of meetings, and there are two kinds of items: ones that have people who are already dedicated to work on them, or willing to work on them, and items that are kind of a wish list, or icebox items, which basically have no people to work on them.
A: Perfect. So I think that, if there are no more announcements, we're gonna move on to the demo. PJ, the floor is yours.
G: Okay, thank you, great. I'll just try and share my screen.
G: Okay, so I apologize, firstly, because this deck is pretty much shared from conference to conference at the moment; I haven't really had time to update it. But I'll try and extrapolate the specific Knative elements as I go through, so I'm not boring you with too many superfluous components.
G: So I want to cover how those two things differ in this particular solution. This is an 11-year-old project, or almost 11 years this year, that originally started as a hobby project and is now a large commercial project, so I'll cover some of that evolution as I go through as well.
G: I've said "adding to coffee" here because it's actually an agricultural project, specifically focused on the coffee industry. The case that I had originally was to build a sustainable agricultural product, again as a hobby project before it kind of got out of hand, so it focused on urban farming and rural farming, predominantly in Kenya originally, but now across Africa and South America: around 60,000 farms, I think, on the platform.
G: It focuses on coffee production only, but that was meant as a case study. The components now actually run forestry solutions as well, for sustainable forest goods and other food areas, built on the same core components. The idea is to track coffee production from the origin all the way through to the cup, and make sure that it's sustainable throughout, both ecologically and economically, so covering both of those elements.
G: So that means understanding the market data and understanding the environmental data as well, using predictions from weather systems and commodity markets and trading markets to advise the farm producers and the retailers on what they need to do to make it more sustainable.
G: So the challenges were: low-cost delivery, because I didn't want to pass on many costs to farmers, and I also wanted to make sure that I could afford to build it in the first place; and low-connectivity services, so we had 2G and below.
G: Sometimes it was just simply 1G to get the connectivity, so we were essentially using SMS data to send data up from the farms, so those pipelines have to be efficient. And, as I say on the next point, a small team.
G: But essentially it was just me for at least five or six years, and then it was multiple complementary teams from different companies that helped me get there. And then multi-region: so again, as I've said before, it goes across borders, across Africa and South America and Europe. And then I've used the term self-sovereignty here; for those who aren't aware of the term, it's essentially giving everybody the ownership of their data.
G: That was quite a technical obstacle to actually getting this rolled out, because we wanted to make sure that the data wasn't centralized and none of the actual software was centralized, so the way we rolled out software had to mean that people could own that pipeline themselves as well.
G: So, starting off with the farm, we had the original mist.
G: When I talk about mist, I mean the sensors. We also had automated machinery and drones, and we rolled out custom software to those devices, and we had different partners at different stages. I think in 2015 or 2016 NVIDIA helped out with some drones, otherwise we couldn't have done that, and we did the machine learning on board the drone, so we didn't have to take that data afterwards; it's fully automated.
G: And then we had sensors picking up things like moisture, lighting and different weather effects like wind, just to see what the effect would be on the crops. With the drones we did spectrographic analysis, so we could scan the crops by flying drones over, taking imagery, and then working out what looked healthy and what didn't, and that's what we're using in the forestry solutions.
G: Now, in large forestry solutions in Canada and the Nordics, we scan the forest, read the data and figure out how healthy the forests actually are. So that's mist. The fog is the components that aggregate in different regions: typically one farm could span quite a large landmass, so they'd have multiple mists, and then the fog would aggregate that data so we didn't need to send data to the cloud. So, quite a few different gateways.
G: We had control management systems there, so you could manage the system with SMS messages; data processing, where we had ETL for data transformations locally, so we didn't send superfluous data up; and then we had machine learning inference locally, so we could figure out how to operate the machinery, but also how to advise the producers on the ground what to do in case we're predicting bad weather effects or significant market changes.
G: A lot of these are fairly off-the-shelf components, but we worked with Arm and IBM to provision a lot of the hardware to make it easier. So this is holistically what a single farm would look like, at a very simple, high level. We have the neural processing units, where NVIDIA provided us with a lot of hardware; actuation control, to actually control the devices using SMS; and we used Apache Kafka to do aggregation of the data and Spark to do the ETL or ELT.
G: The fog management was Pelion, which was originally owned by Arm. We used Pelion because the farms had to be multi-cloud, since there's a self-sovereignty issue, so Pelion did the device management rather than something like Azure IoT Hub or AWS's device management software. We used custom Yocto builds for the implementations, and then K3s orchestration over the top of that on the fog platforms, and the mist platforms had lower-level software.
G: But the fog platforms used K3s orchestration, both from the cloud through to the edge devices, and that was provisioned predominantly manually in the beginning, then using Pelion, and then eventually using an automated pipeline, which I'll cover in a moment. And then on each fog we had a custom low-level update manager, written in Rust, just to ensure that we could replace the orchestration layer if need be, and we worked with Arm on that software.
G: I'm going to focus on the data platform. There are a lot of other components within this solution, such as mobile apps with their different interfaces and integration components, but that would take far too long to go over and I don't think I could keep everyone's attention for that. So I'll focus on the data platform for now, and that's where the pipelines are the most interesting, I think. So, for the ingest:
G: I used Kafka for data streaming, Flink for the aggregation, Spark for the processing, and then Hadoop over HDFS and Lustre for storage, so that was high-performance storage of data. Each ingest area for the cloud, and there were multiple ingests, would be handling around 320 terabytes of data a month on average, so we had to ensure that the file system itself was high performance in that solution. And then a data lake I built; again, it had to be cross-platform, although a lot of cloud providers provide their own.
G: But this was a generic, portable one that anybody else could own if they wanted to run this solution and manage it on their own farm. So it's fairly standard components and fairly relevant to the conversation, but I'll just go over it quickly: we used Airflow for the orchestration of the data pipeline, Apache Atlas for governance, Spark for the ETL, and Apache Ranger for security, and then there was an Amundsen data catalog for actually listing the data and allowing it to be queryable by other sources.
G: So the data lake was singular and central, but conceptually could be managed by multiple teams with their own governance and their own security protocols, and this has been reused on multiple solutions since I built it for this project. It's kind of the same components, again in solutions like the forestry solution, and we're using a lot of these components in medical solutions as well, like medical equipment, where we can sense what's happening with a certain piece of medical equipment in the hospital.
G: And then we separated the data lake into a data mesh. What I mean by that is that technically it's very similar, but we separated the teams into multi-disciplinary teams. The data scientists were not separated into their own containers; they worked with the development teams in functional product teams instead. So they would develop, with the development teams, an end-to-end solution for every data ingest that came in from every source; for example, lighting data or moisture data would be a product, and we would build a service based on that.
G: So we'd build a batch service and a standard REST or GraphQL web service as well, based on that, so it would be managed by the same group of people, and that allowed a different focus and set of priorities depending on who I was working with in scaling this solution up. Then each product is served to the lake, and it can also be served to third-party components as well, like mobile apps or web apps, and then the governance is distributed.
G: So, as I mentioned previously, we use Apache Atlas for distributed governance, but the data could be centralized. Physically, how it works is different from how the teams operate: the teams are decentralized, but the storage can be centralized, so the policies were managed centrally as well, and it was important to manage those policies in a ubiquitous way, because the complexity of this is quite high. And then this is specific to the mapping solutions.
G: We had to do a lot of augmenting of the data indexing. Again, it's not really relevant to Knative, but it's worth going over that there were a lot of custom mapping solutions worked on to get the indexing and performance up on that data.
G: The business intelligence was the bit that made most organizations interested in this solution; that was all about providing pretty graphs and reporting, and there are a lot of different components in there that made that happen. A lot of the companies we worked with were using things like Power BI, which is quite ubiquitous in a lot of organizations.
G: We needed something that could roll out in each organization independent of which cloud provider they go with or what hardware they're using. So again Kafka and Spark were there; we used Druid for the time-series and real-time data and Kylin for historical data, and most of these are Apache projects that are quite proven in this kind of space.
G: So it's been very easy to get them rolled out, but the rollout originally was very manual to get all of this to work. Then Redash was used for the actual dashboards, and Superset as well for the real-time dashboards, and I added a few extra things at the end, which I started adding last year to this solution: Pinot for new data, which we're still reviewing, and then Presto for data querying, so that's the Facebook aggregation engine for data queries. And then the data science elements; this is where the research happens.
G: A lot of this is hypothesis-driven development. A lot of the machine learning to do the predictive intelligence is based on ideas and concepts that aren't proven, so we need the data scientists to be able to look at that data in a safe way without any of the data getting leaked. So we create a secure data science pocket that comes through the same data lake, and they can experiment with that data without actually having access to that data; they go through the same catalog.
G: Typically you'd use something like Terraform. I don't, if I can get away with it, so I didn't on this project; I had free range to do whatever I wanted, so I created a custom cloud bootloader, and this bootloader would load up any cloud provider that was supported.
G: So we did the big three, and we also supported Red Hat and Alibaba and DigitalOcean from the offset, and then we supported custom hardware if somebody wanted to install any of these components on their own infrastructure. That custom bootloader was written in Rust and Go, a combination of both: Go for the CDKs and SDKs for the cloud providers, and Rust for the actual runtime. And then it was event-based, rather than things like Terraform, which are configuration-based for deployments.
G: With this you could tell it how to react to different events happening within the cloud infrastructure, or even on boot, to be reactive and make additional changes to the infrastructure. We actually added machine learning to this component as well, so it could react based on what it learned and scale infrastructure appropriately, and it's essentially a single-click application.
G: So you double-click, once you've given it the config of where the hardware is in terms of the network, and it will copy itself onto that network, build it and also secure it: it will create the firewalls and lock down the SSH port, so nobody can physically get onto that infrastructure once it's built, and then it just destroys and rebuilds itself over time. The CDKs and SDKs are already provided by the cloud providers, and it's stateless and reusable.
G: So if you run it again, it will just look at things like DNS settings, figure out what the current infrastructure looks like, and destroy and rebuild what it needs to each time.
G: I spent a stupid amount of time writing code over and over again, and then eventually I moved to using Argo CD for getting the clusters up, because I could just do the configuration once, using Kustomize and throwing some Helm configs over, and it would build that infrastructure for me, and it would bring it from Git originally. But that's something I tried to avoid: I didn't want to drive the infrastructure from Git.
G: I wanted to drive it from the bootloader, so the bootloader would say what infrastructure needs to happen, regardless of what's happening in Git; it would just pull it out of Git when it decides it needs to. But essentially Argo CD, which is what I'm using at this point, is GitOps-driven, and it separates the configurations from the code, so I could just say: hey, here's an Argo CD deployment, which is the first thing I did.
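As a rough illustration of that separation of configuration from code, an Argo CD Application of the kind described, pointing at a Kustomize overlay in Git and syncing it automatically; the repo, path, and namespace below are hypothetical, not taken from the talk:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-services            # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy.git   # placeholder repo
    targetRevision: main
    path: overlays/production        # Kustomize overlay; Helm values can be handled similarly
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true                    # keep the cluster converged on the declared config
      selfHeal: true
```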
G: I launched two components with this: an authentication component and then Argo CD, and that was essentially my pipeline up and running. Then, once Argo CD is up, it would build the rest of the infrastructure, the code would manage how that looks, and it would be configuration-driven after that. This only happened at the beginning of last year, or the year before, I think 2019, when I put Argo CD in experimentally, but it's scaling up at the moment.
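That "Argo CD builds the rest" step is commonly done with an app-of-apps pattern; the talk doesn't spell out the mechanism, so this is a hedged sketch with hypothetical repo and paths:

```yaml
# A root Application that points at a directory of child Application manifests,
# so auth + Argo CD + this one object can bring up the remaining infrastructure.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy.git   # placeholder repo
    targetRevision: main
    path: bootstrap/apps             # directory of child Application YAMLs
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
```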
G: So the cluster architecture now looks like this. It's a very simple overview, but this is a zero-trust architecture. We have an authentication system across the entire environment, and you can see on both layers there's OAuth 2 and OIDC; those are standard components. We have Istio for the cloud-based clusters, and there are clusters on the edge devices, but I won't go into detail on those; they're not that different.
G: Apart from the fact that we're not running Istio on the edge devices, due to memory issues originally. And then we have Prometheus for logging, and then, because it's Envoy-based, we use custom WebAssembly extensions on the actual HTTP traffic; we handle the HTTP and the UDP traffic using WebAssembly extensions, and then, on each of these service nodes, that ports through to a Knative deployment, that workload/service/function component.
G: You can see there that's essentially a Knative component, and then we used NATS for doing the event queuing, and there are CloudEvents throughout this architecture as well for how things communicate with each other. This was a sample I sent to one of the partners we were working with, to show them how we did the enterprise integration patterns, and that's why we've got EIP on the middle component there, but it's a fairly generic overview of our Knative cloud deployment. The clusters are all essentially the same, and they're decentralized and stateless.
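The "workload/service/function" box on such a diagram would correspond to an ordinary Knative Service; a minimal sketch, where the image, names, and NATS address are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sensor-ingest                # hypothetical workload
  namespace: workloads
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: registry.example.com/farm/sensor-ingest:1.4.2   # placeholder image
          env:
            - name: NATS_URL
              value: nats://nats.messaging.svc.cluster.local:4222  # placeholder NATS endpoint
```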
G: So we have Tekton now for building services; Tekton is essentially all about CI. We only use Argo for the CD and Tekton for the CI, and that begins with the Knative tasks. So, from the code repository, we do the builds once and then we can do multiple deployments from those builds; we've separated CI and CD completely.
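A CI-only Tekton pipeline of that shape, clone once and build/push the image once, could be sketched like this using the public catalog git-clone and kaniko tasks; the names and parameters here are illustrative, not taken from the talk:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-publish            # hypothetical pipeline
spec:
  params:
    - name: repo-url
    - name: image
  workspaces:
    - name: source
  tasks:
    - name: clone
      taskRef:
        name: git-clone              # Tekton catalog task
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-image
      runAfter: ["clone"]
      taskRef:
        name: kaniko                 # Tekton catalog task: builds and pushes the image
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
```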
G: They're two separate ideas, to simplify the amount of computation that's happening and reduce the number of potential failures. We're using Helm throughout as well, with a custom ChartMuseum, and then when it gets to Tekton there are a couple of different components managing security, to keep the integrity of the builds high: Notary and Falco are both used to maintain the integrity of those images and the dependencies, and then it goes into container storage.
G: So once the CI pipeline puts it into container storage and it's fully tested and tagged, we assume at that point that that image is perfectly usable and we don't have to retest it when we go to a deployment. And then we do a lot of canary deployments.
G: Blue-green deployments, and multiple-version deployments at the same time, managed in the cluster configs, so multiple versions can be live at the same time, and we want to make sure that they're still being tested and nothing goes stale. And then CloudEvents: as I said before, CloudEvents are used.
G: CloudEvents are used both in the custom services and in the pipeline as well, for making sure that, when something happens within a build or within the infrastructure, we manage the provisioning dynamically based on those CloudEvents. And then, as I said, the integrity and security checks are part of the automated audit.
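Reacting to CloudEvents like that is typically expressed in Knative Eventing as a Broker plus Triggers; a sketch, where the event type and subscriber are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-image-pushed              # hypothetical trigger
  namespace: pipelines
spec:
  broker: default
  filter:
    attributes:
      type: com.example.registry.image.pushed   # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: provisioner              # hypothetical service that re-provisions on the event
```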
G: And then there's a developer gateway. That was simply web-based generation, using something like Hugo, I think, for building a static website from OpenAPI 3 and GraphQL documents, then generating documentation and deploying it with Redoc into a cluster environment. And then it uses the same zero-trust auth layer to actually do the authentication after that, so you're still logging in through the same system to manage who has access to different services, to actually develop against those. This is for both third-party and internal developers.
G: And now for building the models, and this is where it got a little bit complicated. Originally we were doing a lot of this manually, using a lot of NVIDIA hardware for doing massive model development, building and then testing, which didn't really work when doing manual deployment and having to go through Pelion for that as well, to deploy it to the edge devices and mobile devices; and now web applications as well have models built in. So now it's driven by Tekton.
G: I took the same pipelines, once I got them working with the service deployment, and got them to continue that build through Tekton into Kubeflow, so Tekton throws the process for building the model into Kubeflow. We use feature extraction and Katib for hyperparameter tuning, use TensorFlow Data Validation to validate the data, and then use TensorFlow for the model training, predominantly.
G: There are a couple of other data science tools that we're using, but mostly it's TensorFlow, and I've been using TensorFlow for quite a few years now, so it works well at this scale. Then we use TensorFlow Model Analysis to do the analysis of the model and make sure it's performing. In fact, in the ETL and in the models and model training there's a lot of unit testing throughout. This isn't typical in the industry when you're doing data science projects, but it is something I've ensured goes throughout this.
G: So there's unit testing and model testing throughout, to make sure that the integrity of the models is correct, and there's federation on these models as well. So this is a centralized machine learning system; this is a federated one, where we deploy the models to the edge and even mobile devices, and they run there. They don't run in a central cloud environment that often, and then they federate their intelligence together dynamically.
G: So they have to be tested rigorously to make sure that the results will be usable, and then they're deployed using TensorFlow Serving on Knative as well, so TensorFlow Serving running on a Knative deployment. The models are deployed there with an API, and then they can be pulled down from there, or they can be pulled down physically, using TensorFlow Lite, onto devices dynamically.
G: So we can update mobile apps without replacing the app itself, just by pulling down the model, and we do that with the edge deployment as well. The edge deployment is, again, Pelion device management and Yocto. But when we're rolling things out we're doing it with phased, staged deployments, so we can schedule what's happening.
G: For example, if we pick Kenya as a region, we can say Kenya's getting an update next week but the rest of the world's not going to get it for another month, and that allows us to manage those rollouts a bit more effectively. Again, we use this process for rolling out deployments in medical projects and in the forestry solutions as well, to make sure that we don't roll out everything at once; we can test different regions and we can also do rollbacks. And we can do region and customer.
G: In this case there weren't really customers, but there is a customer in the medical solutions and in the forestry solution, so we have to be able to tag those separately as well and give them a specific version in the pipeline rollouts.
G: And then we do incremental versions with canary and SemVer 2 deployments, and we use those in the headers. A lot of people will specifically set versions in, for example, the URLs for services; we use HTTP headers, so that's how we determine which version we should be directing traffic to on the actual clusters. And then the mobile devices download the TF Lite models, and again they're federated.
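Routing to a version by HTTP header rather than by URL maps onto Knative's tagged revisions plus the tag-header-based-routing flag (the exact ConfigMap it lives in has moved between releases); a hedged sketch with hypothetical names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features              # in some releases this flag lived in config-network instead
  namespace: knative-serving
data:
  tag-header-based-routing: "enabled"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: market-advisor               # hypothetical service
spec:
  template:
    metadata:
      name: market-advisor-v2-1-0
    spec:
      containers:
        - image: registry.example.com/farm/market-advisor:2.1.0   # placeholder image
  traffic:
    - revisionName: market-advisor-v2-0-0
      percent: 100
      tag: v2-0-0
    - revisionName: market-advisor-v2-1-0
      percent: 0
      tag: v2-1-0    # callers opt in with a header such as Knative-Serving-Tag: v2-1-0
```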
G: So when the TF Lite models are on the devices, the models learn on those devices and send their learnings, not data, back to the cloud. So it's entirely private: we're not actually sharing data from any organization, we're just sharing the learnings, and then the models are improved at that point and pushed back down. So this solution, entirely from end to end, is now:
G: We have a cluster set up on day one with Argo CD, then we have Tekton doing the CI builds throughout, and that drives the versioning system. And then everything is Knative, including on the edge; the K3s deployment on the edge devices is a Knative deployment as well.
G: I'm done, that's it. Thank you.
A: Awesome. So we actually have a few questions for you. We can still see your screen, yeah.
A: So Alec is asking what choices were considered around performance and cost trade-offs, and what the budget and funding levels were.
G: Okay, so the budget from day one, in 2011, was zero. It was literally a hobby project, and that's how it got a little bit crazy. In 2014, when it started to scale up, I realized that zero budget doesn't work for what I was trying to do; we had about 113 components at that point, so the budget ended up going up to six figures. I think it was about 340,000 a year.
G: Maintenance costs to run this, and that was funded by several big cloud companies, so yeah, about 340 thousand a year. As for current running costs for the entire solution: for the data platform itself we're looking at around six or seven thousand a month, I think, minimum, when it's scaled down to a kind of static level. And on the trade-offs and performance: most of the time we're looking at scaling resources more than scaling performance.
G: So it was about how we could maintain this easily with a very small team, and that's why, in this case, Knative made sense, because building custom Kubernetes deployments, which is what I did originally (in fact, I didn't even use managed Kubernetes), was a nightmare. So it was about simplifying each layer to make the rollouts a lot easier and maintenance a lot simpler, and performance was easy to get right after that, once we got to a maintenance level that was manageable.
G: Yes, I can share the slides as well; like I said, the text is too small, that's fine, I can send those out, so yeah, you can look at those.
G: So yeah, I use the eventing broker and I use NATS, so there's a combination of both depending on the time, but I try to use the eventing broker as much as possible. There are multiple different things, though; we used other eventing systems as well.
G: I can't remember, some more Apache tools, obviously, but we also used Azure Event Hub for some of the eventing at one point, unfortunately, and due to dependencies we had to pull that back out again. And what version of Knative? The latest, because I'm living on the edge, so I just keep pushing things as late as I can get.
G: I think in some of the cloud environments I'm using older versions, due to the fact that the managed Kubernetes that comes pre-installed is an older version, and with Knative I wanted to make sure there's good compatibility with that. So there are a few variations; most of the time I'll go with the latest stable version, and occasionally I'll go with a nightly build, sorry.
G: Yes, sorry, yeah, so that was just what version of Knative am I using. Is there anything particular about Knative Serving and Eventing that stands out on the edge? Not particularly; I think it solved a lot of problems I had with manual management. I think most of what stood out for me was actually getting Kubernetes there at all, because, as I said, I'm using K3s, but using Kubernetes at all on edge devices was a difficult thing to manage with memory, so it was always about how to get the edge devices to operate effectively, a lot of the time.
G: That's why I evolved into K3s; I used Kubernetes to begin with and we couldn't get it to work, and the same with Istio, we couldn't get that to work, so we had to roll into using Linkerd on the edge devices for that reason. So yeah, I don't really notice anything to do with Knative Serving and Eventing, because it works well compared to all the other problems I've had going onto edge devices with the rest of the stack. And yes, there's a lot of cost savings.
G: With the Kubeflow thing in particular, I tried MLflow as well, and I tried using the cloud providers' native ones. I used the Azure and AWS model-building infrastructure for a lot of customer projects, and they seem really straightforward until you try to do anything custom or specific, whereas with KFServing and Kubeflow it ended up resolving a lot of those conflicts and allowed me to take a custom build on one project, and now I've rolled it onto literally hundreds of projects and not had to rebuild it at all.
G: What does Knative not do for me that I wish it did? Not too much at the moment; I think it's evolving nicely. I did use other serverless components; I used OpenFaaS originally as well. I replaced OpenFaaS with Knative because I found Knative more technical, but it gave me a lot of the freedom I needed to do things, and once I got into Knative it always worked.
G: I didn't have a problem, and with its built-in support for Istio it just made the rollouts and everything a lot easier to do. But I did have problems with Tekton, which obviously isn't Knative, but it's a spin-off of Knative, and I think it wasn't really a problem with Tekton, it's just a lack of features, and that's why I'm using both Argo CD and Tekton in this solution.
G: Yeah, and anybody that wants to reach out afterwards as well, feel free to reach out with any more detailed questions if you want some answers or ideas. I'd say, for me, this is opinionated really, but I think it's good to separate. So, is it good to separate CI from CD in GitOps?
G: I think it is, because CI and CD are two different concepts entirely, and conflating them, for me, is like conflating C and C++: they're not the same thing, and it just ends up creating a bit of a nightmare. You typically only build something once and then you deploy it multiple times, so conflating CI and CD into one thing ends up, I think, messing up the pipelines a little bit and creating too many problems, both in terms of verification and in terms of the computation you're doing.
G: Yeah, so, sorry, Alex asked anything more about the data sovereignty and how farmers are using edge and mist. The farmers aren't technically savvy, but, unfortunately, in the situation they're in, they're getting ripped off, mostly by European markets and American markets, because those have dominance in terms of selling. So giving them the tools to actually run their own solution was important, both in cost reduction, but also in order for them to roll out the tools they see fit.
G: So all I have to do is give them some software, or give them an API token, and they can roll it out and use it. And then there are cooperatives operating in Africa that bundle their resources to get a fairer price, and those cooperatives use their own hardware as well, so they're creating their own sovereignty, and they don't want to share it.
G: How easy is it to learn Knative? It's really easy, compared to trying to learn all of AWS's different components and Azure's different components. So I think, for me, I push Knative as much as possible, but I'll be honest that it doesn't go as smoothly on any of the projects as I would like it to, because most people I work with are already bought into a specific cloud provider and they will go with their built-in stack, yeah.
G: It is easier than Azure or AWS, because the certification process there is so complicated; I think they literally give you a certification process that means you're locked into their solution, because you don't ever want to learn anything else again after you've done it. So Knative's a lot easier: once you've got it rolled out once, it pretty much works in any other Kubernetes deployment.
G: As long as the versions are similar. It's things like the proxy configs or the orchestration areas that, I think, add more complexity than Knative itself, and when you're dealing with things like OpenFaaS and Serverless, they have their own complexities as well, which makes it harder, because they don't work as well as Knative does at a lower level or a customized level. And dealing with their serverless stuff, it was the functions-as-a-service part.
G: I think I found that the most difficult to get right, because the serverless side works fine; but if you're trying to do a FaaS product, and again a lot of the teams I've worked with couldn't differentiate what the difference was, or a fast prototype, you had a few more issues getting Knative to work, and that's purely due to technical complexity.
G: I had more problems with Istio. Any problems with Istio during the Knative work? I didn't have many problems, most of the time, that weren't Istio's fault, but that, for me, was probably due to my lack of understanding of Istio. I did a lot of learning when playing with Istio and playing with Envoy-based proxies, and that's again why I've ended up building a lot of custom WebAssembly proxy extensions now.
G: That's because I realized solving those problems was a programmatic solution, and I didn't want to customize, as in put code into, the upstream projects at all. I did do that in Kubernetes at one point and I do regret it, so instead I customize it with WebAssembly, which makes a lot more sense.
A: Thank you so much again for presenting, and yeah, amazing presentation. If you are able to share the slides later on, please add a link on the agenda, if that's possible; I think a lot of people would love to revisit them.
A: I think we ran out of time today to do breakout rooms as we usually do, but we can do that next time, and you will find the recording of this meetup, as well as the cuts of the demo, on the Knative YouTube channel. So thank you, everybody, for joining us, and I hope to see you next time.