From YouTube: Knative Demo: Taking AI to the Edge
Description
On January 27, 2021, P.J Ćaszkowicz, Creative Technologist at Omnijar, presented "Taking AI to the Edge", a demo covering a decade of evolving a large-scale sustainable agriculture project, including distributed machine learning across edge and cloud platforms.
Okay, so I apologize firstly, because this deck is pretty much shared from conference to conference. At the moment I haven't really had time to update it, but I'll try and extrapolate the specific Knative elements as I go through, so I'm not boring you with too many superfluous components.
So I want to cover how those two things differ in this particular solution. This is an 11-year-old project, or almost 11 years this year; it originally started as a hobby project and it's now a large commercial project, so I'll cover some of that evolution as I go through as well. But this means that a lot of the components are not consistent throughout this solution, so I'll also try and talk about some of the difficulties of how that technology roadmap has evolved as we've gone through.
So I've said "adding to coffee" here, because it's actually an agricultural project specifically focused on the coffee industry. The case that I had originally was to build a sustainable agricultural product, again as a hobby project before it kind of got out of hand. It focused on urban farming and rural farming, predominantly in Kenya originally, but now across Africa and South America: around 60,000 farms on the platform, I think.
It focuses on coffee production only, but that was meant as a case study. The components now actually run on forestry solutions as well, to do sustainable forestry, and in other food areas too, built on the same core components.
So the idea is to track coffee production from the origin all the way through to the cup, and make sure that it's sustainable throughout, both ecologically and economically, so covering both of those elements. That means understanding the market data and the environmental data as well: using predictions from weather systems and from commodity and trading markets to advise the farm producers and the retailers on what they need to do to make it more sustainable. And so the challenges were: low-cost delivery.
I didn't want to pass on many costs to farmers, and I also wanted to make sure that I could afford to build it in the first place. Low-connectivity services: we had 2G and below; sometimes it was simply 1G, so we were essentially using SMS data to send data up from the farms, and those pipelines have to be efficient. And a small team, as I'll say on the next point.
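Since the uplink was often literally SMS, payload efficiency mattered. Here is a minimal sketch of fitting binary sensor readings into a single 140-byte SMS payload; the field layout and scaling factors are my own assumptions, not the project's actual wire format:

```python
# Hypothetical SMS payload packing for constrained farm uplinks.
import struct

# One reading: sensor id (uint16), moisture % x100 (uint16),
# light in lux (uint16), wind in dm/s (uint16) -> 8 bytes per reading.
READING = struct.Struct(">HHHH")
SMS_LIMIT = 140  # single-SMS binary payload limit in bytes

def pack_readings(readings):
    """Pack as many readings as fit into one SMS-sized payload."""
    max_fit = SMS_LIMIT // READING.size  # 17 readings per message
    chunk = readings[:max_fit]
    return b"".join(
        READING.pack(sid, int(moist * 100), lux, int(wind * 10))
        for sid, moist, lux, wind in chunk
    )

def unpack_readings(payload):
    """Reverse the packing on the receiving side."""
    return [
        (sid, m / 100, lux, w / 10)
        for sid, m, lux, w in READING.iter_unpack(payload)
    ]
```

With fixed-point fields like these, a single SMS carries 17 readings instead of two or three lines of JSON.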
Essentially it was just me for at least five or six years, and then it was multiple complementary teams from different companies that helped me get there. Then multi-region: as I've said before, it goes across borders, across Africa and South America and Europe. And then I've used the term self-sovereignty here; for those who aren't aware of the term, it's essentially giving everybody the ownership of their own data.
So that was quite a technical obstacle to actually getting this rolled out, because we wanted to make sure that neither the data nor the actual software was centralized. The way we rolled out software had to mean that people could own that pipeline themselves as well.
So, starting off with the farm, we had the original mist. When I talk about mist, I mean the sensors, and we also had automated machinery and drones.
We rolled out custom software to those devices, and we had different partners for different stages. I think in 2015 or 2016 NVIDIA helped out with some drones; otherwise we couldn't have done that. We did the machine learning on board the drone, so we didn't have to take that data off afterwards, so it was fully automated. And then we had sensors picking up things like moisture, lighting and different weather effects like wind, just to see what the effect would be on the crops in those zones.
We did spectrographic analysis, so we could scan the crops by flying drones over, taking imagery, and then working out what looked healthy and what didn't. That's what we're using in the forestry solutions now, in large forestry solutions in Canada and the Nordics: we scan the forest, read the data and figure out how healthy the forests actually are.
And so that's the mist. The fog is the set of components that aggregate in different regions. Typically one farm could span quite a large landmass; they'd have multiple mists, and the fog would aggregate that data so we didn't need to send data to the cloud, so quite a few different gateways.
We had control management systems there, so you could manage the system with SMS messages; data processing, so we had ETL for data transformations locally, so we didn't send superfluous data up; and then we had machine learning inference locally, so we could figure out how to operate the machinery, but also how to advise the producers on the ground what to do in case we were predicting bad weather effects or significant market changes.
A lot of these components are fairly off-the-shelf components, but we worked with Arm and IBM to provision a lot of the hardware to make it easier. So this is holistically what a single farm would look like, at a very simple high level. We have the neural processing units (NVIDIA provided us with a lot of hardware for that), SMS control to actually control the devices, and we used Apache Kafka to do aggregation of the data and Spark to do the ETL or ELT.
And we used LoRaWAN gateways across different farms to aggregate that data from the different devices, including landing areas for the drones.
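The fog's job of summarizing mist data locally, so only compact statistics travel over the constrained uplink, can be sketched like this; the real stage used Kafka and Spark, and the record shape here is purely illustrative:

```python
# Hypothetical fog-level aggregation: collapse a window of raw per-sensor
# readings into one summary record before anything leaves the farm.
from statistics import mean

def aggregate_window(readings):
    """readings: iterable of (sensor_id, value) pairs from the mist layer."""
    by_sensor = {}
    for sensor_id, value in readings:
        by_sensor.setdefault(sensor_id, []).append(value)
    # One small dict per sensor goes upstream instead of every raw sample.
    return {
        sid: {"min": min(vs), "max": max(vs),
              "mean": round(mean(vs), 2), "n": len(vs)}
        for sid, vs in by_sensor.items()
    }
```

A window of thousands of raw samples reduces to a handful of numbers per sensor, which is what makes the SMS-grade uplink workable.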
So the fog management was Pelion; it was originally owned by Arm. What Pelion did, because it had to be multi-cloud for the farms (there's a self-sovereignty issue), was the device management, rather than using something like Azure IoT Hub or AWS's device management software. We used custom Yocto builds for the implementations, and then K3s orchestration over the top of that on the fog platforms; the mist platforms had lower-level software.
But the fog platforms used K3s orchestration, both from the cloud through to the edge devices, and that was provisioned predominantly manually in the beginning, then using Pelion, and then eventually using an automated pipeline, which I'll cover in a moment. And then on each fog we had a custom low-level update manager written in Rust, just to ensure that we could replace the orchestration layer if need be, and we worked with Arm on that software.
So I'm going to focus on the data platform. There are a lot of other components within this solution: there are mobile apps, different interface and integration components as well, but that would take far too long to go over, and I don't think I could keep everyone's attention for that one.
So I'll focus on the data platform for now, and that's where the pipelines are the most interesting, I think. For the ingest I used Kafka for data streaming, Flink for the aggregation, Spark for the processing, and then Hadoop over HDFS and Lustre for storage, so that was high-performance storage of data. Each ingest area for the cloud (because there were multiple ingests) would be handling around 320 terabytes of data a month on average.
So we had to ensure that the file system itself was high-performance in that solution. And then I built a data lake; again, it had to be cross-platform. A lot of cloud providers provide their own, but this was a generic, portable one that anybody else could own if they wanted to roll this solution out and run it on their own farm. It's fairly standard components and fairly relevant to the conversation, but I'll just go over it quickly.
We used Airflow for the orchestration of the data pipeline, Apache Atlas for governance, Spark for the ETL and Apache Ranger for security, and then there was an Amundsen data catalog for actually listing the data and allowing it to be queryable by other sources.
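The way an Airflow-style orchestrator resolves the run order of pipeline steps can be illustrated with a dependency graph and a topological sort; the task names below are hypothetical, not the project's actual DAG:

```python
# Hypothetical DAG: each task maps to the set of tasks it depends on.
# An orchestrator like Airflow only runs a task once all of its
# upstream dependencies have completed.
from graphlib import TopologicalSorter

dag = {
    "ingest": set(),
    "validate": {"ingest"},
    "transform": {"validate"},   # the Spark ETL step
    "catalog": {"transform"},    # e.g. register output in Amundsen
    "govern": {"transform"},     # e.g. apply Atlas/Ranger policies
}

# static_order() yields a valid execution order for the whole pipeline.
run_order = list(TopologicalSorter(dag).static_order())
```

Any ordering the sorter produces respects every dependency edge, which is all the scheduler needs.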
So the data lake was singular and central physically, but conceptually it could be managed by multiple teams with their own governance and their own security protocols. This has been reused on multiple solutions since I built it for this project; it's kind of the same components, again in solutions like the forestry solution, and we're using a lot of these components in medical solutions as well, like medical equipment, where we can sense what's happening with a certain piece of medical equipment in the hospital.
And then we separated the data lake into a data mesh. What I mean by that is that technically it's very similar, but we separated the teams into multi-disciplinary teams. The data scientists were not separated into their own containers; they worked with the development teams in functional product teams instead. So they would develop, with the development teams, an end-to-end solution for every data ingest that came in. Every source, like for example lighting data or moisture data, would be a product, and we would build a service based on that.
So we'd build a batch service and a standard REST or GraphQL web service as well, based on that, and it would be managed by the same group of people. That allowed a different focus and set of priorities depending on who I was working with in scaling this solution up. Each product is served to the lake, and it can also be served to third-party components as well, like mobile apps or web apps, and then the governance is distributed.
So, as I mentioned previously, we used Apache Atlas for distributed governance, but the data could be centralized. So how it physically works is different from how the teams operate: the teams are decentralized, but the storage can be centralized. The policies were managed centrally as well, and it was important to manage those policies in a ubiquitous way, because the complexity of this is quite high. And then this is specific to the mapping solutions.
We had to do a lot of augmenting of the data indexing. Again, it's not really relevant to Knative, but it's worth going over that there were a lot of custom mapping solutions that were worked on to get the indexing and performance up on the in-memory data.
So the business intelligence was the bit that made most organizations interested in this solution; that was all about providing pretty graphs and reporting. There are a lot of different components in there that made that happen. A lot of the companies we worked with were using things like Power BI, which is quite ubiquitous in a lot of organizations.
We needed something that could roll out in each organization independent of what kind of cloud provider they go with, or what hardware they're using. So again Kafka and Spark were there; we used Druid for the time-series and real-time data and Kylin for historical data. Most of these are Apache projects, and they're quite proven in this kind of space.
So it's been very easy to get them rolled out, but the rollout originally was very manual to get all of this to work. Then Redash was used for the actual dashboards, and Superset as well for the real-time dashboards. I added a few extra things at the end; what I started adding last year to this solution is Pinot for new data (we're still reviewing that), and then Presto for data querying, so that's the Facebook-originated engine for data querying.
And then the data science elements; this is where the research happens. A lot of this is hypothesis-driven development: a lot of the machine learning to do the predictive intelligence is based on ideas and concepts that aren't proven. So we need the data scientists to be able to look at that data in a safe way, without any of the data getting leaked. And so we create a secure data science pocket that comes through the same data lake, and then they can experiment with that data without actually having direct access to that data.
So they go through the same catalog, but there's a set of policies in the governance to prevent them doing anything else with the data. Once they've done a model or report, they can put that into a pipeline and ship it either to the BI tool, to deliver a report, or to the edge network or the cloud intelligence services, if they want to actually render that model in a solution somewhere, and that is automated as well, so the actual pipeline deployments are too.
This is where it actually gets interesting, where we start looking at those pipelines of how everything gets built and deployed. One of the things I did differently to most client projects I work on: typically they use something like Terraform. I don't, if I can get away with it, so I didn't on this project; I had free range to do whatever I wanted. So I created a custom cloud bootloader, and this bootloader would load up any cloud provider that was supported on it.
So we did the big three, and we also supported Red Hat and Alibaba, and DigitalOcean as well from the outset, and then we supported custom hardware if somebody wanted to install any of these components on their own infrastructure. That custom bootloader was written in Rust and Go, a combination of both: Go for the CDKs and SDKs for the cloud providers, and Rust for the actual runtime. And then it was event-based, rather than things like Terraform, which are configuration-based for deployments.
You could tell it how to react to different events happening within the cloud infrastructure, or even on the boot, to be reactive and make additional changes to the infrastructure. We actually added machine learning to this component as well, so it could react based on what it learned and scale infrastructure appropriately, and this is essentially a single-click application.
So you double-click, once you've given it the config of where the hardware is in terms of the network, and it will copy itself onto that network, build it and also secure it, so it will create the firewalls and lock down the SSH port. So nobody can physically get onto that infrastructure once it's built, and then it just destroys and rebuilds itself over time. The CDKs and SDKs are already provided by the cloud providers, and then it's stateless and reusable.
So if you run it again, it will just look at things like DNS settings, figure out what the infrastructure looks like, and destroy and rebuild what it needs to each time.
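That stateless destroy-and-rebuild behaviour boils down to a reconcile step: compare the desired topology with what actually exists and compute the difference. A toy sketch, with resources as plain dicts standing in for real cloud SDK calls:

```python
# Hypothetical reconcile step of an event-driven bootloader: both sides are
# name -> spec mappings; in reality "actual" would be discovered from the
# provider (e.g. via DNS records and SDK list calls).
def reconcile(desired, actual):
    """Return (to_create, to_destroy, to_replace) resource-name sets."""
    to_create = set(desired) - set(actual)          # missing entirely
    to_destroy = set(actual) - set(desired)         # no longer wanted
    to_replace = {name for name in set(desired) & set(actual)
                  if desired[name] != actual[name]} # exists but drifted
    return to_create, to_destroy, to_replace
```

Because the function only looks at the two snapshots, running it twice in a row is harmless, which is what makes the bootloader stateless and reusable.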
I spent a stupid amount of time writing code over and over again, and then eventually I moved to using Argo CD for getting the clusters up, because I could just do the configuration once, using Kustomize and throwing some Helm configs over, and it would build that infrastructure for me. That would bring it from Git originally, but that's something I tried to avoid: I didn't want to drive the infrastructure from Git.
So
I
didn't
want
to
drive
the
infrastructure
from
git.
A
I
wanted
to
drive
it
from
the
bootloader,
so
the
bootloader
would
say
what
infrastructure
needs
to
happen,
regardless
of
what's
happening
in
git.
It
would
just
pull
it
out
of
it
when
it
decides
it
needs
to,
but
essentially
argo
cd,
which
is
what
I'm
using
at
this
point,
is,
is
git,
ops,
driven
and
then
it
separates
the
configurations
from
the
code.
So
I
could
just
say:
hey
here's
a
argo
cd
deployment,
which
is
the
first
thing
I
did.
I launched two components with this: an authentication component and then Argo CD, and that was essentially my pipeline up and running. Then, once Argo CD is up, it would build the rest of the infrastructure, the code would manage how that looks, and then this would be configuration-driven after that. This only happened at the beginning of last year, or the year before; I think in 2019 I'd put Argo CD in experimentally, but it's scaling up at the moment.
So the cluster architecture now looks like this. It's a very simple overview, but this is a zero-trust architecture: we have an authentication system across the entire environment, with OAuth 2 and OIDC on both layers. There are standard components: we have Istio for the cloud-based clusters, and there are clusters on the edge devices, but I won't go into detail on those; they're not that different.
Apart from that we're not running Istio on the edge devices, due to memory issues originally. Then we have Prometheus for logging, and then, because it's Envoy-based, we use custom WebAssembly extensions on the actual HTTP traffic, so we handle the HTTP and the UDP traffic using WebAssembly extensions. That ports through, on each of these service nodes, to a Knative deployment, so that workload service function component.
You can see there that's essentially a Knative component, and then we used NATS for doing the event queuing, and there are CloudEvents throughout this architecture as well, for how things communicate with each other. This was a sample thing I sent to one of the partners we were working with, to show them how we did the enterprise integration patterns; that's why we've got EIP on the middle component there. But it's a fairly generic overview of our Knative cloud deployment for clusters; they're all essentially the same, and they're decentralized and stateless.
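A CloudEvent, as used between these components, is just an envelope with a few required context attributes. A minimal, structurally valid CloudEvents 1.0 event built with the standard library; the event type and source names here are made up:

```python
# Hypothetical CloudEvents 1.0 envelope as JSON, built without any SDK.
import json
import uuid
from datetime import datetime, timezone

def make_event(event_type, source, data):
    """Build a dict carrying the required CloudEvents 1.0 attributes."""
    return {
        "specversion": "1.0",      # required by the CloudEvents spec
        "id": str(uuid.uuid4()),   # unique per event
        "type": event_type,
        "source": source,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

# What a component would publish to the NATS queue:
event = make_event("farm.sensor.moisture", "/fog/region-01", {"pct": 41.5})
payload = json.dumps(event)
```

Because every component agrees on this envelope, the same event can drive a service, a pipeline step, or an infrastructure reaction.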
So we have Tekton now for building services; Tekton is essentially all about CI. We only use Argo for the CD and Tekton for the CI, and that begins with the Knative tasks. From the code repository we do the builds once, and then we can do multiple deployments from those builds, so we've separated CI and CD completely.
Those are two separate ideas, and that simplifies the amount of computation that's happening and reduces the potential for errors. We're using Helm throughout as well, with a custom ChartMuseum, and then there are a couple of different components managing security, to keep the integrity of the builds high. Notary and Falco are both used to maintain the integrity of those images and the dependencies, and then it goes into container storage.
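The build-integrity idea, accepting an image at deploy time only if its digest still matches what CI signed off, reduces to a digest comparison. A toy sketch (the real pipeline used Notary for this; the record format here is invented):

```python
# Hypothetical integrity gate: recompute an image's digest and compare it
# with the record produced when CI tested and tagged the build.
import hashlib

def image_digest(image_bytes):
    """Content-address an image blob the way registries do."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify(image_bytes, signed_record):
    """Accept the image only if its digest matches what CI signed off."""
    return image_digest(image_bytes) == signed_record["digest"]
```

This is why the image doesn't need retesting at deploy time: if the digest matches, it is bit-for-bit the artifact that already passed CI.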
So once the CI pipeline output goes into container storage, it's fully tested and it's tagged, and we assume at that point that that image is perfectly usable and we don't have to retest it when we go to a deployment. And then there are a lot of canary deployments, blue-green deployments and multiple-version deployments at the same time, managed in the cluster configs.
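A canary split of that kind can be driven by a deterministic hash of some request key, so a fixed share of traffic consistently hits the new revision. A sketch under that assumption; in the real system this was managed in the cluster configs, not in application code:

```python
# Hypothetical canary router: hash the request key into one of 100 buckets,
# and send buckets below the canary percentage to the new revision.
import zlib

def route(request_key, canary_percent):
    """Return 'canary' for roughly canary_percent% of keys, else 'stable'."""
    bucket = zlib.crc32(request_key.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Hashing (rather than random choice) keeps each caller pinned to one revision, which makes canary metrics and rollbacks much easier to reason about.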
So CloudEvents are used both in the custom services and in the pipeline as well, for making sure that when something happens within a build or within the infrastructure, we manage the provisioning dynamically based on those cloud events. Then, as I said, the integrity and security checks are part of the automated audit, and then there's a developer gateway, which was simply web-based generation.
So using something like, I think it's Hugo, for developing a static website from OpenAPI 3 and GraphQL documents, then generating documentation and deploying it with Redoc into a cluster environment. And then it uses the same zero-trust auth layer to actually do the authentication after that, so you're still logging in through the same system to manage who has access to different services, to actually develop against those. This is for both third-party and internal developers.
And now for building the models; this is where it got a little bit complicated. Originally we were doing a lot of this manually, using a lot of NVIDIA hardware for doing massive model development, building and then testing, which didn't really work with manual deployment and having to go through Pelion for that as well, to deploy to edge devices and mobile devices. Now web applications as well have models built in, so now it's driven by Tekton.
A
So
we
I
took
the
same
pipelines
once
I
got
them
working
with
the
service
deployment
and
then
got
it
to
continue
that
build
through
into
tecton
through
text
and
into
kubeflow.
So
texan
throws
the
process
for
building
a
model
into
kubeflow.
We
use
feature
extraction,
indicative
for
hyper
parameter
tuning
use,
tensorflow
data
validation
to
validate
the
data,
then
use
intensive
model
training,
predominantly
there's
a
couple
of
other.
A
The
data
science
tools
that
we're
using,
but
mostly
it's
tensorflow
and
I've
been
using
tensorflow
for
quite
quite
a
few
years
now.
So
it's
it
works
well
on
this
scale
and
then
we
use
tenth
flow
model
analysis
to
do
the
analysis
of
the
model
and
make
sure
it's
performing
so
there's
in
fact,
in
the
etl
and
in
the
models
and
model
training,
there's
a
lot
of
unit
testing
throughout.
So
this
isn't
typical
in
the
industry
when
you're
doing
data
science
projects,
but
it
is
something
I've
ensured
that
goes
throughout
this.
So there's unit testing and model testing throughout, to make sure that the integrity of the models is correct, and there's federation on these models as well. So this is a centralized machine learning system, and this is a federated one, where we deploy the models to the edge and even mobile devices, and they run there. They don't need a central cloud environment that often, and then they federate their intelligence together dynamically.
So they have to be tested rigorously to make sure that the results will be usable, and then they're deployed using TensorFlow Serving on Knative as well: TensorFlow Serving is running on a Knative deployment, and the models are deployed there with an API. They can be pulled down from there, or they can be pulled down physically using TensorFlow Lite onto devices dynamically.
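The federation step described here, devices sharing learnings rather than data, is essentially federated averaging. A deliberately tiny sketch with weights as plain lists; real training used TensorFlow and TFLite:

```python
# Hypothetical federated averaging: each device sends only a weight delta
# ("learnings"), never its raw data, and the cloud folds the mean delta
# into the shared global model.
def federated_average(global_weights, client_deltas):
    """Apply the mean of the client weight deltas to the global model."""
    n = len(client_deltas)
    return [
        w + sum(delta[i] for delta in client_deltas) / n
        for i, w in enumerate(global_weights)
    ]
```

The improved global model is then pushed back down to the devices, which is the loop the talk describes.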
So we can update mobile apps without replacing the app itself, just by pulling down the model, and we do that with the edge deployment as well. The edge deployment, again, is Pelion device management and Yocto, but when we're rolling things out we're doing it with phased, staged deployments, so we can schedule what's happening.
For example, if we pick Kenya as a region, we can say Kenya's getting an update next week, but the rest of the world's not going to get it for another month, and that allows us to manage those rollouts a bit more effectively. Again, we use this process for rolling out deployments in medical projects and in the forestry solutions as well, to make sure that we don't roll out everything at once; we can test different regions and we can also do rollbacks. So we can do region and customer.
In this case there weren't really customers, but there is a customer in the medical solutions and in the forestry solution, so we have to be able to tag those separately as well and give them a specific version in the pipeline rollouts.
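The phased, region-and-customer-tagged rollout logic can be sketched as a per-region schedule of effective dates and pinned versions; the dates and version numbers below are invented:

```python
# Hypothetical phased-rollout schedule. Each region lists its phases
# newest first as (effective-from date, version); regions without their
# own entry follow the "default" schedule.
from datetime import date

phases = {
    "kenya":   [(date(2021, 2, 1), "2.4.0"), (date(2021, 1, 1), "2.3.1")],
    "default": [(date(2021, 3, 1), "2.4.0"), (date(2021, 1, 1), "2.3.1")],
}

def version_for(region, today):
    """Pick the newest version whose rollout date has passed for this region."""
    for effective, version in phases.get(region, phases["default"]):
        if today >= effective:
            return version
    return None  # nothing rolled out to this region yet
```

Rolling a region back is then just pinning an earlier version at the top of its phase list.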
And then we do incremental versions with canary and SemVer 2 deployments, and we use those in the headers. A lot of people will specifically set versions in, for example, the URLs for services; we use HTTP headers. So that's how we determine which version we should be directing traffic to on the actual clusters, and then the micro devices download the TF Lite models, and again they're federated.
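Header-driven version selection, as opposed to versioned URLs, can be sketched like this; the header name `X-Api-Version`, the revision names, and the fallback rule are my assumptions, not the project's actual convention:

```python
# Hypothetical gateway-side resolution of a SemVer request header to a
# deployed revision.
DEPLOYED = {"1.2.0": "svc-v1-2-0", "1.3.0": "svc-v1-3-0", "2.0.0": "svc-v2-0-0"}

def semver(v):
    """'1.3.0' -> (1, 3, 0), so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def resolve(headers, default="2.0.0"):
    """Map the X-Api-Version header to the revision traffic should go to."""
    wanted = headers.get("X-Api-Version", default)
    if wanted in DEPLOYED:
        return DEPLOYED[wanted]
    # Fall back to the newest deployed version at or below the request.
    older = [v for v in sorted(DEPLOYED, key=semver)
             if semver(v) <= semver(wanted)]
    return DEPLOYED[older[-1]] if older else DEPLOYED[default]
```

Keeping the version out of the URL means one stable endpoint per service, with the clusters' routing layer doing the per-version traffic direction.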
So when the TFLite models run on the devices, the models learn on those devices and send their learnings, not data, back to the cloud. So it's entirely privacy-aware: we're not actually sharing data from any organization, we're just sharing the learnings, and then the models are improved at that point and pushed back down. And so that's this solution, entirely from end to end, as it is now.
We have a cluster set up on day one with Argo CD, and then we have Tekton doing the CI builds throughout, and then that drives the versioning system. And then everything is Knative, including on the edge: the K3s deployment on the edge devices is a Knative deployment as well. Yep.

"You have about three to five more minutes."

I'm done, that's it. Thank you.